Dariusz on Software Quality & Performance

16/03/2012

What's Wrong With Automated Integration Tests?

A quite typical picture: at software development company X, the first delivery of project Y to the testing team after a few months of coding has just failed because the software is buggy and crashes often. The Project Manager decides to invest some money in an automated testing tool that promises to solve all stability problems with its click-and-replay interface. The demos are very impressive. The tool is integrated and a bunch of tests are "clicked".

After a month, 10% of the tests are failing. 10% is not a big deal; we can live with that. After another month, 30% of the tests fail because an important screen design was changed and some tests fail on authorization for some reason. Pressure for the next delivery increases, and the chances of delegating some testers to fix the failing test cases shrink every week.

What are the final results of introducing this tool?

  • an unmaintained test suite (and the tool itself) is abandoned
  • man-days lost on setting up test cases
  • $$ lost on the tool and training

Has the tool been introduced too late? Maybe the wrong tool was selected?

In my opinion, automation and integration tests don't play well together. Let's review the main enemies of automation in integration tests:

Initial state setup of the environment and the system itself

For unit-level tests you can easily control the local environment by setting up mocks to isolate other parts of the system. If you want to test integration issues, you have to face integration-level complexity.
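For contrast, a minimal sketch of unit-level isolation, assuming Python's unittest.mock; notify_user and its gateway collaborator are hypothetical:

    # Unit level: the collaborator is replaced by a mock, so the test
    # fully controls its local state (notify_user is hypothetical).
    from unittest import mock

    def notify_user(user_id, gateway):
        """Send a greeting through an injected gateway collaborator."""
        gateway.send(user_id, "hello")
        return True

    def test_notify_user_sends_message():
        gateway = mock.Mock()   # stands in for the real messaging service
        assert notify_user(42, gateway) is True
        gateway.send.assert_called_once_with(42, "hello")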

No more simple setups! If you want to set up state properly to get the expected result, you MUST explicitly set the state of the whole environment. Sometimes it's just impossible, or extremely complicated, to set it up through the UI. If you set the state improperly (or just accept the existing state as a starting point), you will end up with randomly changing results that make the tests useless.
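At the integration level there is no mock to hide behind. A sketch of what explicit whole-environment setup tends to look like, assuming pytest and a hypothetical seed dump for a PostgreSQL test database:

    import subprocess

    import pytest

    @pytest.fixture(autouse=True)
    def clean_environment():
        # Rebuild the test database from a known seed dump before every
        # test (database name and paths are hypothetical).
        subprocess.run(["psql", "testdb", "-f", "seed/known_state.sql"], check=True)
        # Clear external side-effect stores the application writes to.
        subprocess.run(["rm", "-rf", "/tmp/app-outbox"], check=True)
        yield  # the test now starts from a fully known state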

Result retrieval after operation

OK, say we scheduled action A and want to check whether a record was updated in the database. To do that, a list view is opened and the record is located using search. Then the record is opened and we can attach expectations to that screen.
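In code, that UI-level check might look roughly like this; a sketch assuming Selenium WebDriver, with a hypothetical URL and element ids:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get("http://localhost:8000/records")   # hypothetical list view
        driver.find_element(By.ID, "search").send_keys("order-1234")
        driver.find_element(By.ID, "search-submit").click()
        driver.find_element(By.CSS_SELECTOR, "#results tr").click()  # open record
        # The expectation is attached to what the screen shows,
        # not to the database row itself.
        assert "UPDATED" in driver.find_element(By.ID, "status").text
    finally:
        driver.quit()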

OK, but what if we want to check that "an e-mail was sent"? We cannot see that in the application UI. Catching it at the SMTP level will be too unreliable and slow (timing issues).

It will just not work smoothly.
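To make the timing problem concrete: here is roughly what an SMTP-level capture looks like, a sketch assuming the aiosmtpd library and a hypothetical trigger_action_that_sends_email call. The polling loop at the end is where the unreliability lives:

    import time

    from aiosmtpd.controller import Controller

    class MailSink:
        """Collects every message the application tries to send."""
        def __init__(self):
            self.messages = []

        async def handle_DATA(self, server, session, envelope):
            self.messages.append(envelope)
            return "250 Message accepted"

    sink = MailSink()
    controller = Controller(sink, hostname="127.0.0.1", port=8025)
    controller.start()
    try:
        trigger_action_that_sends_email()   # hypothetical system-under-test call
        # The mail is sent asynchronously, so the test can only poll and
        # guess how long is long enough; that guess is the flakiness.
        deadline = time.time() + 10
        while not sink.messages and time.time() < deadline:
            time.sleep(0.2)
        assert sink.messages, "expected an e-mail, none arrived within 10 s"
    finally:
        controller.stop()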

What's next then?

It's easy to criticize everything without proposing (and implementing) efficient alternatives.

My vision of integration testing:

  • I don't check detailed results in automated integration tests; such checks just don't work and cannot be maintained efficiently
  • I usually generate random input to cover as much functionality as possible through high-level interfaces (the UI)
  • I rely on internal assertions/design by contract to catch problems during such random-based testing; they serve as an oracle (the more of them, the better the results); see the sketch after this list
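A minimal sketch of that random-input ("monkey") approach, assuming Selenium WebDriver and a hypothetical APP_URL; the loop only drives the UI, while the application's own internal assertions act as the oracle:

    import random

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    APP_URL = "http://localhost:8000"   # hypothetical system under test

    random.seed(42)                     # fixed seed: failing runs can be replayed
    driver = webdriver.Chrome()
    driver.get(APP_URL)
    try:
        for step in range(500):
            clickable = driver.find_elements(
                By.CSS_SELECTOR, "a, button, input[type=submit]")
            if not clickable:
                driver.get(APP_URL)     # dead end: restart from the entry page
                continue
            try:
                random.choice(clickable).click()
            except Exception:
                continue                # element went stale; re-scan next step
            # Oracle: assume violated internal assertions surface on the
            # page, e.g. as an error banner (a hypothetical convention).
            assert "AssertionError" not in driver.page_source, \
                f"contract violated at step {step}"
    finally:
        driver.quit()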

5 Comments »

  1. Very nice post. Automated Software Testing is always an interesting topic. Thank you for sharing it with us.

    Comment by QA Thought Leaders — 20/03/2012

  2. Not sure which suite of tools you used. I suppose a fixed virtual machine image (testing environment) with things like Sikuli should make this a lot more testable. Vagrant + Jenkins can be used to make this very manageable (with an admittedly high upfront setup time-cost, though). Typical trade-off when you use an open-source stack :P

    Comment by Chris — 18/07/2012

  3. I found the post interesting even though it still doesn't answer the question I have, which is: "Is it a good idea to use both integration and automated tests, or which is a better approach for functionality testing?"

    Comment by manuel cerda — 15/10/2012

  4. Didn't quite follow why you say that automated integration tests do not work. I guess this is true when you write your integration tests coupled with the UI. You could write more non-UI integration tests (in addition to functional tests which test at class/method level) and keep UI-based integration tests to a minimum (these should test only UI integration). Also use a UI testing framework which does not follow the classical "record-replay" approach.

    Comment by Nirmal Sasidharan — 22/11/2013

  5. The on-going maintenance of automated tests is where I find most of my bugs, and it's also a part of what helps keep me focused on what is changing in the application and when. It would be great to write some automation and have it be 'done' once and for all, but that's really only possible with an extremely stable project, in my opinion. Also, some things clearly lend themselves more easily to automation than others, so often times I use automation and manual testing hand-in-hand (generate emails via user interface, check end result manually, for example). This approach fails to attain the dream of some to 'automate it all' but I don't think that's really a worthwhile goal. The bottom line for me is that all automated tests are essentially 'disposable' due to the constantly moving target of the application under test. The greater value lies in the framework behind the tests, the base classes and page objects. That's where you can really codify the hard won experience and knowledge of the application into methods and components which can be used to create more tests more quickly over time.

    Comment by SKent — 24/11/2014
