Guess who the competitor for the FogBugz bug tracker is:
But if you click on the AdWords box, you see the following image (no server available):
Note the weird address: https://www.www.atlassian.com/software/jira/… The duplicated "www." is what makes the target link invalid.
Of course it's a configuration bug made by someone at Atlassian, but let's stop laughing and consider what the proper resolution for this type of problem might be:
- Google / AdWords: couldn't they simply verify (expect HTTP 200) every target address supplied by their customers? It's a very simple change in the service
- Atlassian: an automatic HTTP log scan won't help here, because the web server was never reached (the request failed at the DNS resolution phase), but a 100% bounce rate on the ad should raise a warning
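The AdWords-side check suggested above could be as simple as this sketch (the helper name is hypothetical; a real validator would also issue an HTTP request, follow redirects, and apply timeouts):

```python
import socket
from urllib.parse import urlparse

def target_url_looks_reachable(url):
    """Cheap sanity check for an ad's target URL: the hostname
    must at least resolve in DNS. A full check would additionally
    fetch the URL and expect an HTTP 200 response."""
    host = urlparse(url).hostname
    if not host:
        return False
    try:
        socket.getaddrinfo(host, None)
        return True
    except socket.gaierror:
        return False

# A resolvable host passes; a hostname that doesn't resolve
# (like the duplicated "www.www." one) is rejected up front.
print(target_url_looks_reachable("https://localhost/"))
```

Running this on the customer's target address at ad-creation time would have caught the broken link before a single impression was served.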
Anyway, dealing with such errors in a non-systematic way (fixing just this one error) is dangerous, as further instances are not blocked. It's better to have an automated process that exposes such errors in the future.
We at Aplikacja.info believe that bugs should be eliminated systematically, i.e. every bug that slips through should trigger a proper change in the process. The more "sensitive" the process (the more bugs it exposes), the fewer bugs are left in the end. For example, coding in PHP with massive refactorings is a trouble-maker: the language has no static checking built in, so a high level of test coverage is required to uncover every error. Even a basic internal check (just method names and parameter counts, done by lint-like tools) helps a lot there.
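To illustrate what even a minimal lint-like check buys you, here is a sketch (in Python rather than PHP, but the idea is the same: match call sites against definitions by name and parameter count, without ever running the code):

```python
import ast

def arity_mismatches(source):
    """Report calls whose positional-argument count doesn't match
    the definition of the same name. A crude lint-like check:
    pure parsing, no execution, no test coverage required."""
    tree = ast.parse(source)
    arity = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            arity[node.name] = len(node.args.args)
    problems = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            name = node.func.id
            if name in arity and len(node.args) != arity[name]:
                problems.append((name, node.lineno))
    return problems

code = """
def send_report(path, recipient):
    pass

send_report("/tmp/report.txt")
"""
print(arity_mismatches(code))  # the call with a missing argument is flagged
```

A check this shallow already catches the class of bug that otherwise only surfaces when the refactored call path happens to be executed by a test.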
My customer develops software for embedded Linux devices. At the very beginning of this project we faced low stability of the produced builds, due to the complicated nature of the platform (C++, manual memory management, new APIs to learn). QA did locate such bugs, but only before release, on release branches.
In order to support QA and track the current state of software stability, I added an automated random-testing feature:
Every build produced by the build system is deployed to testing devices automatically, and reports of crashes, failed asserts, and warnings are published to developers (daily, in aggregated form).
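The daily aggregation step can be sketched like this (the report format here is hypothetical: one (build, crash signature) pair per incident):

```python
from collections import Counter

def aggregate_crash_reports(reports):
    """Group raw crash reports into a daily summary: how many
    times each crash signature was hit, most frequent first,
    so developers see the top offenders at a glance."""
    counts = Counter(signature for _build, signature in reports)
    return counts.most_common()

reports = [
    ("build-1042", "SIGSEGV in VideoDecoder::flush"),
    ("build-1042", "assert failed: buffer != NULL"),
    ("build-1043", "SIGSEGV in VideoDecoder::flush"),
]
for signature, count in aggregate_crash_reports(reports):
    print(count, signature)
```

The point of aggregating is social as much as technical: one short ranked list per day gets read, while a stream of individual crash mails gets filtered away.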
A quite typical picture: at software development company X, the first delivery of project Y to the testing team, after a few months of coding, has just failed because the software is so buggy and crashes so often. The Project Manager decides to invest some money in an automated testing tool that will solve all stability problems with its click-and-replay interface. The demos are very impressive. The tool is integrated and a bunch of tests are "clicked".
After a month, 10% of the tests are failing. 10% is not a big deal; we can live with that. After another month, 30% of the tests fail, because an important screen design was changed and some tests cannot get past authorization for some reason. Pressure for the next delivery increases, and the chance of delegating some testers to fix the failing test cases shrinks every week.
What are the final results of introducing this tool?
- an unmaintained set of tests (and the tool itself) is abandoned
- man-days lost setting up the test cases
- $$ lost on the tool and training
Was the tool introduced too late? Was the wrong tool selected?
In my opinion, automation and integration tests don't play well together. Let's review, then, the main enemies of automation in integration tests:
Recently I forgot to add the #reviewthis directive to a modification of the codebase that belongs to team A, and a subtle bug was introduced that way. Oops! I had agreed earlier that every change to moduleB should be passed to a reviewer for peer review of that particular change. What a shame (we are using GitHub's excellent review mechanism, BTW).
How can I avoid that situation in the future? Should I rely on my memory? Is it possible for a human to track so many small rules manually? My intuition tells me that enforcing such a small rule set should be automated.
Git allows you to specify so-called "commit hooks" that can validate many stages of the Git workflow. I'll use the simplest kind, local verification of the commit message. First, the rule in plain text:
If you are changing moduleB you should notify developerC about this change
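A minimal sketch of that rule as a hook, in Python (the module path and directive name are the examples from above; in a real setup this would live in .git/hooks/commit-msg, with the changed paths taken from `git diff --cached --name-only` and the message read from the file Git passes as the first argument):

```python
REVIEWED_PREFIX = "moduleB/"        # paths that require peer review
REQUIRED_DIRECTIVE = "#reviewthis"  # must appear in the commit message

def commit_allowed(changed_paths, message):
    """Reject the commit when moduleB is touched but the commit
    message lacks the #reviewthis directive that routes the
    change to developerC for review."""
    touches_module = any(p.startswith(REVIEWED_PREFIX) for p in changed_paths)
    return (not touches_module) or (REQUIRED_DIRECTIVE in message)

print(commit_allowed(["moduleA/main.cpp"], "Refactor startup"))            # True
print(commit_allowed(["moduleB/session.cpp"], "Fix timeout"))              # False
print(commit_allowed(["moduleB/session.cpp"], "Fix timeout #reviewthis"))  # True
```

When the check returns False, the hook exits with a non-zero status and Git aborts the commit, so the rule no longer depends on anyone's memory.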
The current project I'm working on is based on embedded systems and the Qt platform. Of course, the very first task in the project was to implement some kind of testing method to get feedback on software quality. The test system is composed of a few components:
- Automatic crash reports collected on a central server
- Automatic random-test runners connected to always-running (24/7) devices to catch crashes
The first channel collects all crashes (from both manual and automated tests); the second channel runs fully automatically. The second channel also makes it possible to measure MTBF (mean time between failures) and analyse how it changes over time, which helps estimate the current state of software quality.
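The MTBF calculation itself is trivial; here is a sketch, assuming the runner logs a timestamp (hours since the test window started, a hypothetical format) for every failure:

```python
def mtbf(failure_times, total_runtime):
    """Mean time between failures over one observation window:
    total device runtime divided by the number of failures
    observed. Tracking this per build shows the quality trend."""
    if not failure_times:
        return None  # no failures seen: MTBF unknown, only bounded below
    return total_runtime / len(failure_times)

# A device ran 168 hours (one week, 24/7) and crashed 4 times:
print(mtbf([12.5, 40.0, 90.25, 150.0], 168.0))  # -> 42.0
```

The interesting signal is not the absolute number but its direction between builds: a dropping MTBF on the always-on devices flags a regression days before QA would hit it.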
The second testing channel requires an automatic test driver that injects random UI events (key presses, from a remote control in my case). I used the Qt event queue for that purpose:
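The original Qt code isn't reproduced here; as a sketch of the idea, the driver boils down to a seeded random stream of key events pushed into the application's event queue (in Qt, each key would be wrapped in a QKeyEvent and delivered via QCoreApplication::postEvent). The Qt-independent core, with hypothetical remote-control key names:

```python
import random

REMOTE_KEYS = ["Up", "Down", "Left", "Right", "OK", "Back", "Menu"]

def random_key_sequence(seed, length):
    """Generate a reproducible sequence of remote-control key
    presses. The fixed seed matters: when a random test crashes
    the device, replaying the same seed reproduces the exact
    sequence of events that led to the crash."""
    rng = random.Random(seed)
    return [rng.choice(REMOTE_KEYS) for _ in range(length)]

# Same seed -> same sequence, so every crash is reproducible:
print(random_key_sequence(1234, 5) == random_key_sequence(1234, 5))  # -> True
```

Logging the seed alongside each crash report turns a "random" failure into a deterministic test case.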