What's Wrong With Automated Integration Tests?

A quite typical picture: at software development company X, the first delivery of project Y to the testing team, after a few months of coding, has just failed because the software is so buggy and crashes so often. The Project Manager decides to invest some money in an automated testing tool that promises to solve all stability problems with a click-and-replay interface. The demos are very impressive. The tool gets integrated and a bunch of tests are "clicked".

After a month, 10% of the tests are failing. 10% is not a big deal; we can live with it. After another month, 30% of the tests fail because an important screen design was changed and some tests cannot authenticate in the application "for some reason". Pressure for the next delivery increases, and the chance of delegating some testers to fix the failing test cases shrinks every week.

What are the final results of introducing such a tool?

  • an unmaintained set of tests (and the tool itself) that is eventually abandoned
  • man-days lost setting up test cases
  • $$ lost on the tool and training

Was the tool introduced too late? Or was the wrong tool selected?

In my opinion, automation and integration tests don't play well together. Let's review the main enemies of automation in integration tests:

Initial state setup of environment and system itself

For unit-level tests you can easily control the local environment by setting up mocks that isolate the code under test from the rest of the system. If you want to test integration issues, you have to face integration-level complexity.

No more simple setups! If you want to set up state properly to get the expected result, you MUST explicitly set the state of the whole environment. Sometimes it's simply impossible, or extremely complicated, to do that through the UI. If you set the state improperly (or just accept the existing state as a starting point), you will end up with randomly changing results that make the tests useless (they depend on the order of tests, or on state left over from a previous test run).
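To make the contrast concrete, here is a minimal sketch in Python (all names are hypothetical, not from the post): at the unit level, a mock pins down the starting state completely, so nothing outside the test can leak in. At the integration level there is no mock to construct; the equivalent step is resetting the state of the whole environment, which is exactly the hard part described above.

```python
from unittest.mock import Mock

# Hypothetical domain logic: a service that charges a customer account.
class BillingService:
    def __init__(self, accounts):
        self.accounts = accounts  # repository of account balances

    def charge(self, account_id, amount):
        balance = self.accounts.get_balance(account_id)
        if balance < amount:
            raise ValueError("insufficient funds")
        self.accounts.set_balance(account_id, balance - amount)

def test_charge_unit_level():
    # Unit level: the repository is a mock, so the initial state
    # is fully controlled inside the test itself.
    accounts = Mock()
    accounts.get_balance.return_value = 100
    BillingService(accounts).charge("acc-1", 30)
    accounts.set_balance.assert_called_once_with("acc-1", 70)

test_charge_unit_level()
print("unit-level test passed")
```

An integration-level version of the same test would have to reset a real accounts database to a known balance before every run; that reset, not the assertion, is where the maintenance cost lives.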

Result retrieval after operation

OK, we scheduled action A and want to check whether the record was updated in the DB. To do that, some list view is opened and the record is located using search. Then the record is opened and we can attach expectations to that screen. We cross the UI layer twice: once for the update operation, and a second time for result verification. Are we sure the state has really been persisted in the database?
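One hedge against the double UI crossing is to verify the result one layer down, directly against the database. A sketch with an in-memory SQLite store (the table, column, and status values are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO records (id, status) VALUES (1, 'new')")

# ... the system under test performs action A through the UI;
# here we simulate only its effect on persistent state:
conn.execute("UPDATE records SET status = 'processed' WHERE id = 1")
conn.commit()

# Verify persistence directly in the DB instead of re-reading via the UI.
(status,) = conn.execute(
    "SELECT status FROM records WHERE id = 1"
).fetchone()
assert status == "processed", f"unexpected status: {status}"
print("state verified in database")
```

This trades some end-to-end purity for reliability: the check no longer exercises the read-side UI, but it answers the "was it really persisted?" question directly.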

Secondly, suppose we want to check that "an e-mail was sent" (an example of an external state change). We cannot see that event in the application UI. On the other hand, catching it at the SMTP level would be unreliable and slow (timing issues). Without mocks it's hard to deliver a fast solution here, and using mocks means it's no longer an E2E test but some kind of semi-integration test.
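The mock-based compromise mentioned above can be sketched as follows: swap the SMTP client for a test double that records outgoing mail. The `Mailer` shape and `notify_user` function are hypothetical, not from the post:

```python
class RecordingMailer:
    """Test double: records messages instead of talking to SMTP."""
    def __init__(self):
        self.sent = []

    def send(self, to, subject, body):
        self.sent.append((to, subject, body))

def notify_user(mailer, user_email):
    # Production code would receive a real SMTP-backed mailer here.
    mailer.send(user_email, "Action scheduled", "Your action A was scheduled.")

mailer = RecordingMailer()
notify_user(mailer, "user@example.com")

# Fast, deterministic check of the "e-mail was sent" external effect.
assert mailer.sent == [
    ("user@example.com", "Action scheduled", "Your action A was scheduled.")
]
print("e-mail effect verified via test double")
```

The check is fast and deterministic, but as the paragraph notes, once the mailer is faked the test is only semi-integration: the real SMTP path is no longer exercised.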

It just won't work smoothly.

What's next then?

It's easy to criticize everything without proposing (and implementing) efficient alternatives.

So, here is my vision of integration testing:

  • I'm not checking results in automated integration testing; it just doesn't work and cannot be maintained efficiently
  • I usually generate random input to cover as much functionality as possible through high-level interfaces (the UI)
  • I depend on internal assertions/design by contract to catch problems during such random-based testing; they serve as an oracle (the more assertions, the better the results)
  • More complicated properties could be specified (and tested) with some temporal logic (this technology is not ready yet)
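The random-input-plus-oracle idea in the list above can be sketched like this: drive random operations through a high-level interface and rely on an internal invariant to fail loudly if anything goes wrong. The `Inventory` class and its invariant are illustrative, not from the post:

```python
import random

class Inventory:
    """Toy system under test with an internal invariant (design by contract)."""
    def __init__(self):
        self.stock = {}

    def _check_invariant(self):
        # Oracle: stock counts must never go negative.
        assert all(qty >= 0 for qty in self.stock.values()), self.stock

    def add(self, item, qty):
        self.stock[item] = self.stock.get(item, 0) + qty
        self._check_invariant()

    def remove(self, item, qty):
        available = self.stock.get(item, 0)
        # Guard mirrors what a UI would enforce; drop it and the oracle fires.
        self.stock[item] = available - min(qty, available)
        self._check_invariant()

random.seed(42)  # fixed seed makes the random run reproducible
inv = Inventory()
for _ in range(1000):
    op = random.choice([inv.add, inv.remove])
    op(random.choice("abc"), random.randint(0, 5))
print("1000 random operations, invariant held")
```

No expected results are scripted anywhere: the test only generates input, and the embedded assertions decide pass or fail, which is exactly why this style avoids the maintenance burden of recorded expectations.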

5 Responses to What's Wrong With Automated Integration Tests?

  1. Very nice post. Automated Software Testing is always an interesting topic. Thank you for sharing it with us.

  2. Chris says:

    Not sure which suite of tools you used. I suppose a fixed virtual machine image (testing environment) with tools like Sikuli should make this a lot more testable. Vagrant + Jenkins can be used to make this very manageable (with an admittedly high upfront setup time-cost, though; a typical trade-off when you use an open-source stack 😛).

  3. manuel cerda says:

    I found the post interesting, even though it still doesn't answer the question I have, which is: "Is it a good idea to use both integration and automated tests, or which is a better approach for functionality testing?"

  4. Nirmal Sasidharan says:

    Didn't quite follow why you say that automated integration tests do not work. I guess this is true when you write your integration tests coupled to the UI. You could write more non-UI integration tests (in addition to functional tests that test at the class/method level) and keep UI-based integration tests to a minimum (these should test only UI integration). Also, use a UI testing framework that does not follow the classical "record-replay" approach.

  5. SKent says:

    The on-going maintenance of automated tests is where I find most of my bugs, and it's also part of what keeps me focused on what is changing in the application and when. It would be great to write some automation and have it be 'done' once and for all, but that's really only possible with an extremely stable project, in my opinion.

    Also, some things clearly lend themselves more easily to automation than others, so oftentimes I use automation and manual testing hand-in-hand (generate emails via the user interface, check the end result manually, for example). This approach fails to attain the dream of some to 'automate it all', but I don't think that's really a worthwhile goal.

    The bottom line for me is that all automated tests are essentially 'disposable' due to the constantly moving target of the application under test. The greater value lies in the framework behind the tests, the base classes and page objects. That's where you can really codify the hard-won experience and knowledge of the application into methods and components which can be used to create more tests more quickly over time.
