My customer develops software for embedded devices running Linux. At the very beginning of this project we faced low stability of the builds produced, due to the complicated nature of the platform (C++, manual memory management, new APIs to learn). Of course QA located such bugs, but only late in the cycle, during pre-release testing on release branches.
In order to support QA and track the current state of software stability, I added an automated random-testing feature:
Every build produced by the build system is deployed to testing devices automatically, and crash / failed-assert / warning reports are published to developers (daily, in aggregated form).
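The daily aggregation step can be sketched roughly as follows. The log format, build identifiers, and function names here are assumptions made for illustration; the article does not describe the actual implementation:

```python
from collections import Counter

# Hypothetical log format: "LEVEL|build_id|message" per line.
# The real device logs surely differ; this only shows the idea.
REPORT_LEVELS = ("CRASH", "ASSERT", "WARNING")

def aggregate(log_lines):
    """Count crash / failed-assert / warning reports per build."""
    summary = {}
    for line in log_lines:
        level, build, _message = line.split("|", 2)
        if level not in REPORT_LEVELS:
            continue  # ignore uninteresting log levels
        summary.setdefault(build, Counter())[level] += 1
    return summary

def render_daily_report(summary):
    """Format the aggregated counts for the daily developer mail."""
    lines = []
    for build in sorted(summary):
        counts = summary[build]
        lines.append(build + ": " + ", ".join(
            f"{counts[lvl]} {lvl.lower()}" for lvl in REPORT_LEVELS))
    return "\n".join(lines)

logs = [
    "CRASH|2.0-build-41|segfault in media player",
    "ASSERT|2.0-build-41|buffer index out of range",
    "WARNING|2.0-build-42|slow EPG refresh",
    "CRASH|2.0-build-41|segfault in media player",
]
print(render_daily_report(aggregate(logs)))
# → 2.0-build-41: 2 crash, 1 assert, 0 warning
#   2.0-build-42: 0 crash, 0 assert, 1 warning
```

Aggregating before publishing matters: developers get one short daily summary per build instead of a stream of individual crash reports.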
We tracked every release branch (assuming it's the most important point for quality measurement). The situation looked like the diagram below.
The obvious problem you may notice is related to the multiple release branches that must be tested (some support branches live for a few months). For every new branch you have to set up a new testing device (I decided to attach one device per tested version). This solution was not scalable and not easy to maintain.
If we assume that every fix made on a release branch (2.0.3, for example) eventually lands on its master branch (2.0), it's obvious that potential regressions can be tracked automatically using only master branches:
- regression cause detection during normal development (on the master branch) is more accurate – we are able to say that a given problem was introduced by change X on day Y, around hour Z – it's visible from the automatic test outcomes (crashes / failed asserts starting from version X)
- less reconfiguration overhead – main development lines change less frequently than release branches
- more resources for multi-platform tests: with fewer points in the source tree to track, we can focus on duplicating hardware for automatic tests – some bugs are hardware-related
- we no longer track exact released versions: that's true, but we have QA for that purpose (to test final builds)
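The first point above – pinning a regression to "change X on day Y" – works because each nightly master build carries both a test outcome and the list of changes that entered since the previous build. A minimal sketch of that lookup, with entirely hypothetical record and field names (the article doesn't show the real data model):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BuildResult:
    """One nightly master build plus its automatic test outcome.
    `commits` holds the changes merged since the previous build
    (hypothetical structure, for illustration only)."""
    build_id: str
    commits: List[str] = field(default_factory=list)
    crashed: bool = False

def suspect_changes(history):
    """Return the commits that entered between the last clean build
    and the first crashing one - the regression candidates."""
    for build in history:  # history is in chronological order
        if build.crashed:
            return build.commits
    return []

history = [
    BuildResult("2.0-day1", ["change-A"], crashed=False),
    BuildResult("2.0-day2", ["change-B", "change-C"], crashed=True),
    BuildResult("2.0-day3", ["change-D"], crashed=True),
]
print(suspect_changes(history))
# → ['change-B', 'change-C']
```

This is exactly why testing master beats testing many release branches here: one continuous history per development line keeps the "first failing build" window small and the suspect list short.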