Are you test-infected? Have you already learned how to grow your server-side apps with unit testing and want to do the same for the client (HTML) layer? Search no more: QUnit to the rescue!
First, include the QUnit stylesheet (and the qunit.js script alongside it) in your test page:

    <link rel="stylesheet" href="/resources/qunit.css">
Then you can start writing your tests. Here is the simplest one, taken from the documentation:
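The snippet below is a sketch in the style of the classic "hello test" from the QUnit documentation. Since QUnit itself only runs inside a test page, a minimal stand-in for its API is included so the example runs standalone; that stand-in is my own illustration, not the real QUnit implementation.

```javascript
// Minimal stand-in for QUnit's API so this snippet runs outside a browser;
// in a real test page, qunit.js provides QUnit.test and the assert object.
const QUnit = {
  test(name, fn) {
    const assert = {
      ok(value, message) {
        if (!value) throw new Error(name + ": " + message);
        console.log("PASS " + name + ": " + message);
      },
    };
    fn(assert);
  },
};

// The simplest test, in the style of the QUnit documentation:
QUnit.test("hello test", function (assert) {
  assert.ok(1 == "1", "Passed!"); // 1 == "1" is true after type coercion
});
```

In the browser, the test results render into the page styled by qunit.css instead of going to the console.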
Automated unit tests are hard to write. The software architecture must be designed carefully to allow unit testing. You also have to spend time writing the tests, and writing good ones is not easy; it's easy to make a big mess that becomes hard to maintain after a few weeks.
On the other hand, automated integration tests are fragile and hard to maintain. You can "write" them fairly easily in a record-and-replay tool, but they show their real cost later, during maintenance.
But there is an answer to the problems mentioned above. Do you know the Eiffel language? It has special built-in constructs for executable contract specification, an approach called Design By Contract (DBC). DBC is easier to write and maintain because no special test cases need to be specified: just conditions on method parameters, expected results, and state invariants that must be preserved. How can DBC substitute for old-fashioned tests? Read on!
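Eiffel's require/ensure/invariant keywords have no direct equivalent in most languages, but the idea can be emulated with plain runtime checks. A hedged sketch (the `demand` helper and the bank-account example are my own illustration, not from any library):

```javascript
// Design By Contract, emulated with runtime checks (Eiffel builds
// require/ensure/invariant into the language; here we hand-roll them).
function demand(condition, message) {
  if (!condition) throw new Error("Contract violated: " + message);
}

function withdraw(account, amount) {
  // Precondition: conditions on the method parameters.
  demand(Number.isFinite(amount) && amount > 0, "amount must be positive");
  demand(account.balance >= amount, "insufficient funds");

  const oldBalance = account.balance;
  account.balance -= amount;

  // Postcondition: the expected result.
  demand(account.balance === oldBalance - amount, "balance decreased by amount");
  // Invariant: state that must always be preserved.
  demand(account.balance >= 0, "balance never negative");
  return account.balance;
}

console.log(withdraw({ balance: 100 }, 30)); // → 70
```

Note that no scripted test case appears anywhere: the contracts travel with the code, so any input (including random input) exercises them.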
A new site has just been born: RandomTest.net. Based on my current experience in the IPTV industry, I'm going to create a set of libraries for different environments that will allow you to:
- feed random input to any application (web-based, smartphone, thick-client, …)
- collect the errors found
- send them to a central server
- prepare useful reports for stability analysis
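To make the first bullet concrete, here is a hedged sketch of what "random input" could mean for a web form. The field model, names, and hostile-character alphabet are my own illustration (the RandomTest.net libraries described above do not exist yet); a per-platform driver would then type or click these values into the application.

```javascript
// Sketch: generate random input for a declarative description of a form.
function randomString(maxLen) {
  const chars = "abcXYZ019 <>&\"'%"; // include markup-hostile characters on purpose
  let s = "";
  for (let i = Math.floor(Math.random() * maxLen) + 1; i > 0; i--) {
    s += chars[Math.floor(Math.random() * chars.length)];
  }
  return s;
}

function randomInput(formFields) {
  const values = {};
  for (const field of formFields) {
    switch (field.type) {
      case "text":
        values[field.name] = randomString(32);
        break;
      case "number":
        values[field.name] = Math.floor(Math.random() * 1e6);
        break;
      case "select":
        values[field.name] =
          field.options[Math.floor(Math.random() * field.options.length)];
        break;
    }
  }
  return values;
}

// Hypothetical form description:
const login = [
  { name: "user", type: "text" },
  { name: "age", type: "number" },
  { name: "lang", type: "select", options: ["en", "pl", "de"] },
];
console.log(randomInput(login));
```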
Are manual integration tests expensive, are developers' unit tests hard to implement properly, and does your latest click'n'play tool require ever more maintenance effort for test cases that fail as development goes on?
What if we forget for a moment about driving your UI with static scripts and replace them with totally random input? You get coverage of almost the whole application for free. "Wait," you will say, "but there's no way to check results when the input is random, is there?"
Sure there is: this is where Design By Contract plus Continuous Integration comes into play. You embed assertions in many places in your system. Failed assertions do not crash the application; instead they are reported immediately to a central server and aggregated into daily reports. Crashes, warnings, and errors are reported, too. Then you can measure the quality of your system under test by observing how the number of errors changes day by day. No scripting is required to cover a new line of code: it will be tested automatically!
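A hedged sketch of such a non-crashing assertion; the queue and the (commented-out) reporting step are my own illustration of the architecture described above, not an existing API:

```javascript
// Sketch: a failed assertion does not crash the app; it is queued and
// would be sent to the central aggregation server in the background.
const pendingReports = [];

function softAssert(condition, message) {
  if (!condition) {
    pendingReports.push({
      message,
      stack: new Error(message).stack, // where the contract was violated
      time: new Date().toISOString(),
    });
    // In a real system: flush pendingReports to the central server here,
    // batched over HTTP, so daily reports can be aggregated from them.
  }
  return condition;
}

// The application keeps running even though the contract failed:
softAssert(2 + 2 === 5, "arithmetic is broken");
console.log("still alive, queued reports:", pendingReports.length); // → 1
```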
The source code will be open; the project has just started on github.com.
While searching for existing implementations of random-input testing, I found the following material, which describes the possible levels at which the idea can be implemented using web-browser code tests:
Udacity splits a possible implementation into the following levels; I've added my interpretation for the case where it's not the browser alone under test, but a server-side application:
- verifying only the HTTP protocol: HTTP errors such as 500 and 404 are handled at this level
- HTML-level checks: any problems with tag nesting or SGML/XML syntax can be caught here
- rendering: catches layout-related issues (e.g. overlapping divs)
- forms and scripting: checks application logic using a client-side language and local state (forms and cookies)
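The first two levels are cheap to automate. A hedged sketch of what those checks could look like, written as pure functions over an already-fetched response (the function names and the naive nesting check are my own illustration; a real HTML check would use a proper parser or validator):

```javascript
// Level 1: no HTTP errors (e.g. 500, 404) for the randomly chosen URL.
function checkHttpLevel(status) {
  return status < 400;
}

// Level 2: naive tag-nesting check. Void elements are skipped; attribute
// values containing ">" and other edge cases are deliberately ignored here.
function checkHtmlLevel(html) {
  const voids = new Set(["br", "img", "input", "meta", "link", "hr"]);
  const stack = [];
  const tagRe = /<\/?([a-zA-Z][a-zA-Z0-9]*)[^>]*>/g;
  let m;
  while ((m = tagRe.exec(html))) {
    const tag = m[1].toLowerCase();
    if (m[0][1] === "/") {
      if (stack.pop() !== tag) return false; // mis-nested closing tag
    } else if (!voids.has(tag) && !m[0].endsWith("/>")) {
      stack.push(tag);
    }
  }
  return stack.length === 0; // every opened tag was closed
}

console.log(checkHttpLevel(200));                      // → true
console.log(checkHtmlLevel("<div><p></p></div>"));     // → true
console.log(checkHtmlLevel("<div><p></div></p>"));     // → false
```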
By testing at any one of the above levels, you trade application coverage against level coverage. My idea is to combine random input at all of the above levels with validation of each level's contracts, plus internal contract checks on the server. Then we get the following output:
- catch any HTTP, HTML, rendering (I know this may be hard to automate), or state-related error (though an "error" is not easy to define here)
- collect every assertion failure, warning, error, and crash from the server-side application, with full stack traces and aggregation
My customer develops software for embedded devices running Linux. At the very beginning of this project we faced low stability in the builds produced, due to the complicated nature of the platform (C++, manual memory management, new APIs to learn). QA did locate such bugs, but only late, on the release branches just before a release.
In order to support QA and track the current state of software stability, I added an automated random-testing feature:
Every build produced by the build system automatically lands on the testing devices, and crash / failed-assertion / warning reports are published to developers daily, in aggregated form.
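The aggregation step can be sketched as follows. The report shape, field names, and sample data are my own illustration: raw reports are grouped by a signature (report kind plus top stack frame) and counted per day, so developers see trends instead of a flood of duplicates.

```javascript
// Sketch: aggregate raw crash/assert/warning reports into a daily summary,
// grouped by a signature so duplicate occurrences of one bug collapse.
function aggregate(reports) {
  const summary = {};
  for (const r of reports) {
    const day = r.time.slice(0, 10);               // "YYYY-MM-DD"
    const signature = r.kind + " @ " + r.stack[0]; // top frame identifies the bug
    summary[day] = summary[day] || {};
    summary[day][signature] = (summary[day][signature] || 0) + 1;
  }
  return summary;
}

// Hypothetical reports from one day of random testing on the devices:
const raw = [
  { kind: "crash",  time: "2011-05-10T09:13:00Z", stack: ["player::decode", "main"] },
  { kind: "crash",  time: "2011-05-10T11:40:00Z", stack: ["player::decode", "main"] },
  { kind: "assert", time: "2011-05-10T12:01:00Z", stack: ["epg::load"] },
];
console.log(aggregate(raw));
// → { "2011-05-10": { "crash @ player::decode": 2, "assert @ epg::load": 1 } }
```

Watching these per-signature counts build by build is what makes the stability trend visible.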