A quite typical picture: software development company X delivers project Y to the testing team for the first time after a few months of coding, and the delivery fails because the software is so buggy and crashes so often. The Project Manager decides to invest some money in an automated testing tool that promises to solve all stability problems with a click-and-replay interface. The demos are very impressive. The tool is integrated and a bunch of tests are "clicked".
After a month, 10% of the tests are failing. 10% is not a big deal; we can live with that. After another month, 30% of the tests fail because an important screen design was changed and some tests can no longer recognize the screens. Pressure for the next delivery increases, and the chance of delegating some testers to fix the failing test cases shrinks every week.
What is the final outcome of introducing the tool?
- an unmaintained set of tests (and the tool itself) is abandoned
- man-days lost on setting up test cases
- money lost on the tool and training
Was the tool introduced too late? Or was the wrong tool selected?
In my opinion, automation and integration tests don't play well together. Let's review the main enemies of automation in integration tests:
Initial state setup of the environment and the system itself
For unit-level tests you can easily control the local environment by setting up mocks to isolate the other parts of the system. If you want to test integration issues, you have to face integration-level complexity.
No more simple setups! If you want to set up state properly to get the expected result, you MUST explicitly set the state of the whole environment. Sometimes it's just impossible, or extremely complicated, to set that state through the UI. If you set the state improperly (or just accept the existing state as a starting point) you will end up with randomly changing results that make the tests useless.
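To illustrate the contrast, here is a minimal sketch in Python; `OrderService` and the payment gateway are hypothetical names, not taken from any particular system:

```python
from unittest.mock import Mock

# Hypothetical service that depends on an external payment gateway.
class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        # Succeeds only if the gateway accepts the charge.
        return self.gateway.charge(amount) == "accepted"

# Unit level: the "environment" is fully under our control via a mock,
# so the initial state is trivially reproducible.
gateway = Mock()
gateway.charge.return_value = "accepted"
service = OrderService(gateway)
assert service.place_order(100) is True

# Integration level: the real gateway's state (test accounts, balances,
# previous transactions, network conditions) would all have to be set up
# explicitly, and none of it can be faked away with a one-line mock.
```

The one-line `return_value` is exactly the kind of shortcut that disappears at the integration level, where every piece of state behind the real interface has to be arranged by hand.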
Result retrieval after an operation
OK, we scheduled action A and want to check whether a record was updated in the DB. To do that, some list view is opened and the record is located using search. Then the record is opened and we can attach expectations to that screen.
OK, but what if we want to check that "an e-mail was sent"? We cannot see that in the application UI. Catching it at the SMTP level would be too unreliable and slow (timing issues).
It just won't work smoothly.
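For results that never show up in the UI, one workaround (in systems you control) is to capture the outgoing e-mail in-process instead of watching a real SMTP server. A minimal sketch, assuming a hypothetical `Mailer` abstraction that production code would back with real SMTP:

```python
# Test double that records e-mails instead of sending them.
class RecordingMailer:
    def __init__(self):
        self.sent = []

    def send(self, to, subject, body):
        self.sent.append((to, subject, body))

def notify_user(mailer, user_email):
    # The operation under test: should produce exactly one e-mail.
    mailer.send(user_email, "Record updated", "Your record was updated.")

mailer = RecordingMailer()
notify_user(mailer, "alice@example.com")

# The "was an e-mail sent?" check becomes a direct, fast assertion,
# with no SMTP timing issues involved.
assert len(mailer.sent) == 1
assert mailer.sent[0][0] == "alice@example.com"
```

Of course this only moves the problem: it requires the application to be designed with such a seam, which click-and-replay tools cannot add after the fact.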
What's next, then?
It's easy to criticize everything without proposing (and implementing) efficient alternatives.
My vision of integration testing:
- I don't check expected results in automated integration tests; it just doesn't work and cannot be maintained efficiently
- I usually generate random input to cover as much functionality as possible through high-level interfaces (the UI)
- I rely on internal assertions / design-by-contract checks to catch problems during such random-based testing; they serve as an oracle (the more of them, the better the results)
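The approach above can be sketched as a tiny random-input driver. The `Account` class and its invariant are hypothetical; the point is that no per-case expected results exist, only internal consistency checks acting as the oracle:

```python
import random

class Account:
    """Toy system under test with a design-by-contract style invariant."""
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount
        self._check_invariant()

    def withdraw(self, amount):
        # Guard clause preserves the invariant; accidentally removing it
        # is the kind of bug this style of testing would catch.
        if amount <= self.balance:
            self.balance -= amount
        self._check_invariant()

    def _check_invariant(self):
        # The oracle: no expected-result table, just internal consistency.
        assert self.balance >= 0, "invariant violated: negative balance"

# Random-based driver: hammer the high-level interface with random
# operations and let the internal assertions do the judging.
random.seed(0)
account = Account()
for _ in range(10_000):
    op = random.choice([account.deposit, account.withdraw])
    op(random.randint(1, 100))
```

Maintenance cost stays low because a UI redesign only breaks the input generator, not thousands of recorded expected results; the oracle lives inside the system itself.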