While searching for existing implementations of random-input testing, I found material that describes the possible levels at which the idea can be applied, using web browser tests as an example:
Udacity splits a possible implementation into the following levels; I've added an interpretation for the case where the system under test is not the browser alone but a server-side application:
- HTTP protocol only – errors such as HTTP 500 and 404 are caught at this level
- HTML-level checks – problems with tag nesting and SGML/XML syntax can be caught here
- rendering – catches layout-related issues (e.g. overlapping divs)
- forms and scripting – checks application logic that uses a client-side language and local state (forms and cookies)
By choosing a testing level you trade application coverage against level coverage. My idea is to combine random input at all of the above levels with validation of every level's contract, plus internal contract checks on the server. We would then get the following output:
- any HTTP, HTML, rendering (I know that one may be hard to automate) and state-related error is caught (though "error" is not easy to define here)
- every assertion, warning, error, crash etc. from the server-side application is collected, with full stack traces and aggregation
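The first two levels above can be sketched as a small shell loop: feed a random input to the application and validate the HTTP and HTML contracts of the response. This is only a sketch under assumptions — the base URL, the `q` query parameter and the use of `tidy` as the HTML checker are placeholders to adapt to the system under test:

```shell
#!/bin/sh
# Sketch: random-input probe exercising the HTTP and HTML levels.
# BASE_URL and the "q" parameter are hypothetical; tidy is one
# possible HTML syntax checker (xmllint would do as well).
BASE_URL="${BASE_URL:-http://localhost:8080/search}"

# Generate one random ASCII input of up to 32 characters,
# deliberately including characters that often break escaping.
random_input() {
    dd if=/dev/urandom bs=512 count=1 2>/dev/null \
        | tr -dc 'a-zA-Z0-9%&=<>"'\'' ' | head -c 32
}

probe_once() {
    input=$(random_input)
    # Level 1: HTTP -- any 5xx status is an immediate finding.
    status=$(curl -s -o /tmp/page.html -w '%{http_code}' \
                  --data-urlencode "q=$input" "$BASE_URL")
    if [ "$status" -ge 500 ]; then
        echo "HTTP $status for input: $input"
    fi
    # Level 2: HTML -- run the response through a syntax checker.
    tidy -q -errors /tmp/page.html 2>&1 | grep -i 'error' \
        && echo "HTML errors for input: $input"
}
```

Running `while :; do probe_once; done` against a test instance, with server-side logs collected in parallel, gives the combined output described above.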
Recently I hit a stability problem related to missing resources (a leaking C++ program). To track memory usage across the system, dedicated probes were written that collect many measurements every minute (with timestamps included, to correlate with environment events). The resulting data looks like this:
TIME playerd appman OneApplication datasync mmddf dtvservice pacman oci ondemandservice RCService pvrservice advertising localservices TOTAL
1002-1100 33128 12900 81484 16452 11164 15544 11020 21972 9976 9064 14948 11700 11588 260940
1002-1101 33160 12976 82088 17888 11228 15552 11024 21984 10368 9068 15632 11704 11608 264280
1002-1102 33172 12980 82100 17888 11720 15560 11024 21984 17604 9068 15780 11704 11608 272192
1002-1103 33172 12980 82100 34236 11804 15560 11024 21984 17608 9072 16200 11704 11608 289052
1002-1104 33172 12988 82108 44448 11860 15672 11024 21984 17608 9072 16764 11708 11864 300272
1002-1105 33172 12988 82112 44452 11860 15688 11024 23744 17608 9072 21584 11708 11864 306876
1002-1106 33172 12988 82252 32824 11860 15688 11024 22908 17608 9072 26876 11708 11864 299844
1002-1107 33176 12988 82252 32824 11860 15688 11024 22908 17608 9072 33020 11708 11864 305992
But such raw data is pretty hard to analyse. The first idea was to load it into a spreadsheet and use its plotting capabilities, but that was very slow for large amounts of data (we sometimes review measurements from several days, sampled every minute). Then the answer came to mind: gnuplot.
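Before plotting, a quick awk pass over the probe output already points at the leak candidate: compute each column's growth between the first and the last sample and report the process that grew the most. A minimal sketch, assuming the exact format shown above (TIME column first, TOTAL last):

```shell
#!/bin/sh
# Sketch: find the process whose memory grew the most over the
# sampled window -- a quick leak-candidate filter before plotting.
leak_candidate() {
    awk '
        NR == 1 { for (i = 2; i < NF; i++) name[i] = $i; next }
        NR == 2 { for (i = 2; i < NF; i++) first[i] = $i }
        { for (i = 2; i < NF; i++) last[i] = $i }
        END {
            best = 2
            for (i = 2; i < NF; i++)   # i < NF skips the TOTAL column
                if (last[i] - first[i] > last[best] - first[best])
                    best = i
            print name[best], last[best] - first[best]
        }' "$@"
}
```

For the actual plot, something like `gnuplot -e "set key autotitle columnheader; plot for [i=2:14] 'mem.log' using 0:i with lines"` draws one curve per process (exact column range depends on your probe output).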
AWK is a small but very useful Unix scripting language aimed mainly at filtering and modifying text files. On an embedded device you cannot expect its bigger brothers (such as Perl or Python) to be available, but AWK is usually shipped as part of busybox, which is small.
One of the missing features is a join() function (the inverse of splitting a string by a regular expression). One can implement it pretty easily, however:
function join(array, sep,    result, i)
{
    if (sep == "")
        sep = " "
    else if (sep == SUBSEP)   # magic value: join with no separator
        sep = ""
    result = array[1]
    for (i = 2; i in array; i++)
        result = result sep array[i]
    return result
}
split(s, arr, "|")
output = join(arr, "|")
As a result, the output string will have the same contents as the input string s. The function is useful for scripted modification of CSV files.
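As a worked example of that CSV use case, the split()/join() round trip can rewrite a single field of a '|'-delimited record and reassemble the line. The choice of field and the toupper() transformation are just for illustration:

```shell
#!/bin/sh
# Sketch: modify one field of a '|'-delimited record using the
# split()/join() round trip described above.
upcase_second_field() {
    awk '
    function join(array, sep,    result, i) {
        if (sep == "")
            sep = " "
        else if (sep == SUBSEP)   # magic value: no separator
            sep = ""
        result = array[1]
        for (i = 2; i in array; i++)
            result = result sep array[i]
        return result
    }
    {
        split($0, f, "|")     # take the record apart...
        f[2] = toupper(f[2])  # ...modify one field...
        print join(f, "|")    # ...and put it back together
    }'
}
```

For example, `echo 'id|name|city' | upcase_second_field` prints `id|NAME|city`.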
Guess who is a competitor of the FogBugz bug tracker:
But if you click the AdWords box, you see the following image (no server available):
Note the weird address: https://www.www.atlassian.com/software/jira/… That doubled "www." is what makes the target link invalid.
Of course, it's a configuration bug made by someone at Atlassian, but let's stop laughing and consider what a proper resolution for this class of problem might look like:
- Google / AdWords: couldn't they simply check (for HTTP 200) every target address supplied by their customers? It's a very simple change to the service
- Atlassian: an automatic HTTP log scan won't help, since the web server is never reached (the failure happens at the DNS resolution phase), but a 100% bounce rate on the campaign should raise a warning
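The AdWords-side fix could be as simple as a pre-flight check of the target URL. A minimal sketch, covering the two failure modes from this story (a mangled `www.www.` hostname caught statically, and any landing page that does not answer with 200 caught by curl; the timeout value is arbitrary):

```shell
#!/bin/sh
# Sketch: validate an ad's target URL before accepting it.
check_target() {
    url=$1
    # Cheap static check first: a doubled "www." is almost always a typo.
    case $url in
        *://www.www.*) echo "suspicious host in $url"; return 1 ;;
    esac
    # Then the real probe: does the landing page answer with HTTP 200?
    status=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$url")
    [ "$status" = "200" ] || { echo "HTTP $status for $url"; return 1; }
}
```

Run against the address from this story, `check_target` rejects it on the static check alone, before any network traffic is needed.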
Anyway, dealing with such errors in a non-systematic way (fixing just this one instance) is dangerous, because further instances are not blocked. It's better to have an automated process that helps expose such errors in the future.
We at Aplikacja.info believe that bugs should be eliminated systematically, i.e. every bug that slips through should trigger a corresponding change in the process. The more "sensitive" the process (the more bugs it exposes), the fewer bugs are left at the end. For example, coding in PHP with massive refactorings is a trouble-maker, as the language has no static checking built in and requires a high level of test coverage to uncover every error. Any internal check (even just method names and parameter counts verified by lint-like tools) helps a lot there.