Dariusz on Software

Software development stuff

Entries from October 2013.

Thu, 31 Oct 2013 11:35:53 +0000

PostgreSQL is an open source database server that is sometimes slower than, for example, MySQL, but more powerful. For a typical MySQL user the more complicated authentication system of PostgreSQL may be a bit confusing (they expect just a login and password), but it has one advantage over password-based mechanisms: security. I'll show you how to set up ident-based authentication for your applications (an authentication method that uses the underlying operating system's mechanisms).

Installation, assuming version 8.4 is available for your OS (may be different):

$ sudo apt-get install postgresql-8.4

Switch to the postgres operating system account for database management; all the following commands are executed from the postgres account:

$ sudo su - postgres

First, create a database as a namespace for your tables:

postgres:$ createdb mydatabase

Second, create the application's PostgreSQL user. We do not want to grant too many permissions to this user:

postgres:$ createuser myuser --no-superuser --no-createdb --no-createrole

Now we have to grant the PostgreSQL user access to the newly created database (access to just this single database):

postgres:$ psql mydatabase -c 'GRANT ALL PRIVILEGES ON DATABASE mydatabase TO myuser'

Finally, we have to allow connections from the operating system account darek to the PostgreSQL account myuser:

/etc/postgresql/8.4/main/pg_ident.conf
 mymap  darek   myuser
/etc/postgresql/8.4/main/pg_hba.conf
 local   all         all                               ident map=mymap
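For reference, the columns in those two files are: map name, operating system user name and PostgreSQL user name in pg_ident.conf; connection type, database, user and authentication method (with options) in pg_hba.conf ("local" lines have no address column). Annotated, the two lines above read roughly as follows:

# pg_ident.conf:  MAPNAME  SYSTEM-USERNAME  PG-USERNAME
mymap  darek  myuser

# pg_hba.conf:    TYPE  DATABASE  USER  METHOD  [OPTIONS]
local  all  all  ident map=mymap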

For the changes to take effect you have to reload the PostgreSQL configuration files:

postgres:$ /etc/init.d/postgresql reload

What is happening here requires a bit of explanation:

  • A mapping called "mymap" is created from the local Linux user (darek) to the PostgreSQL account (myuser)
  • Local connections over the Unix socket authenticate user darek using the "mymap" mapping

As a result, "darek" can log in as myuser from his account; let's test this:

$ psql mydatabase myuser -c "CREATE TABLE mytable(x int)"
 CREATE TABLE

As you have probably noticed, no passwords were set. Authentication is done by the operating system. This is much safer than storing passwords in the application's configuration files.
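A quick way to see it in action from the shell (output abbreviated; the rejected attempt is shown as you would typically see it on 8.4, the exact message may differ in your setup):

$ whoami
 darek
$ psql mydatabase myuser -c "SELECT current_user"
  current_user
 --------------
  myuser
 (1 row)

Run from an operating system account that has no entry in pg_ident.conf, the same command is rejected:

$ psql mydatabase myuser -c "SELECT current_user"
 psql: FATAL:  Ident authentication failed for user "myuser"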

Tags: debian.
Sun, 13 Oct 2013 06:34:24 +0000

A new site has just been born: RandomTest.net. Based on experience gained from many projects, I've created a set of libraries for different environments that allow you to:

  • feed random input to any application (web-based, smartphone, thick client, ...)
  • collect the errors found
  • send them to a central server
  • prepare useful reports for stability analysis


The rationale: manual integration tests are expensive, developers' unit tests are hard to implement properly, and your latest click'n'play tool requires more and more maintenance effort as test cases keep failing during development.

What if we forget for a moment about scripting your UI with static scripts and replace them with totally random input? You get coverage of almost the whole application for free. "Wait", you will say, "but there's no way to check the results if the input is random, is there?"

Sure there is: this is where Design by Contract plus Continuous Integration comes into play. You embed assertions in many places in your system. Failed assertions do not crash the application, but are reported immediately to a central server and aggregated into daily reports. Crashes, warnings and errors are reported, too. Then you can measure the quality of your system under test by observing how the number of errors changes day by day. No scripting is required to cover a new line of code; it will be tested automatically! The source is open, the project is present on github.com.
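A minimal sketch of such a "soft" assertion in shell (the helper, the endpoint URL and the checked file are made up for illustration; this is not RandomTest.net's actual API):

#!/bin/sh
# Report a violated contract instead of aborting the application.
REPORT_URL="http://reports.example.com/api/assert"   # hypothetical central server endpoint

soft_assert() {
    desc="$1"; shift
    if ! "$@"; then
        echo "ASSERTION FAILED: $desc" >&2
        # fire-and-forget report; failures are aggregated server-side
        curl -s -d "msg=$desc" -d "host=`hostname`" "$REPORT_URL" >/dev/null 2>&1 &
    fi
}

# usage: check an invariant without crashing the program (the path is made up)
soft_assert "config file present" test -f /etc/myapp.conf

The real libraries do the equivalent inside the application process and attach stack traces, but the idea is the same: check, report, continue.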

Tags: monitoring.
Fri, 11 Oct 2013 21:06:43 +0000

While searching for existing implementations of random input-based testing, I've located material from Udacity that describes possible levels of implementing that idea with web browser tests:

Udacity splits a possible implementation into the following levels; I've added an interpretation for the case where it is not the browser alone that is under test, but a server-side application:

  • verifying only the HTTP protocol - HTTP errors like 500 and 404 are handled at this level (see the sketch after this list)
  • HTML-level checks - any problems with tag nesting or SGML/XML syntax can be caught here
  • rendering - catches any layout-related issue (e.g. overlapping divs)
  • forms and scripting - checks application logic using a client-side language and local state (forms and cookies)
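As an illustration of the lowest level, a crude HTTP-only random walk could look like the sketch below (the base URL and the naive link extraction are assumptions for illustration, not part of any existing tool):

#!/bin/sh
# Level-1 check (HTTP protocol only): follow random links from the start page
# and report any 5xx response triggered by the random input.
BASE="http://localhost:8080"            # hypothetical application under test

for i in `seq 1 100`; do
    # pick a random href from the start page (very naive extraction,
    # assumes absolute paths like /login)
    path=`curl -s "$BASE/" | grep -o 'href="[^"]*"' | cut -d'"' -f2 | shuf -n 1`
    code=`curl -s -o /dev/null -w "%{http_code}" "$BASE$path"`
    case "$code" in
        5*) echo "HTTP $code for $BASE$path" ;;
    esac
done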

By testing at any of the above levels you trade application coverage against level coverage. My idea is to combine random input at all of the above levels with validation of each level's contracts, plus internal contract checks on the server. Then we will have the following output:

  • any HTTP, HTML, rendering (I know it might be hard to automate) and state-related error is caught (though an error is not easy to define here)
  • every assertion, warning, error and crash from the server-side application is collected, with full stack traces and aggregation
Tags: quality, testing.
Wed, 02 Oct 2013 22:26:48 +0000

Recently I've hit a stability problem related to missing resources (leaking C++ programs). In order to track memory usage in the system, dedicated probes were written that collected measurements every minute (with a timestamp included, to correlate with environment events). Then we got the following data:

TIME playerd appman OneApplication datasync mmddf dtvservice pacman oci ondemandservice RCService pvrservice advertising localservices TOTAL
1002-1100 33128 12900 81484 16452 11164 15544 11020 21972 9976 9064 14948 11700 11588 260940
1002-1101 33160 12976 82088 17888 11228 15552 11024 21984 10368 9068 15632 11704 11608 264280
1002-1102 33172 12980 82100 17888 11720 15560 11024 21984 17604 9068 15780 11704 11608 272192
1002-1103 33172 12980 82100 34236 11804 15560 11024 21984 17608 9072 16200 11704 11608 289052
1002-1104 33172 12988 82108 44448 11860 15672 11024 21984 17608 9072 16764 11708 11864 300272
1002-1105 33172 12988 82112 44452 11860 15688 11024 23744 17608 9072 21584 11708 11864 306876
1002-1106 33172 12988 82252 32824 11860 15688 11024 22908 17608 9072 26876 11708 11864 299844
1002-1107 33176 12988 82252 32824 11860 15688 11024 22908 17608 9072 33020 11708 11864 305992
(...)

But such raw data is pretty hard to analyse. The first idea was to use a spreadsheet with its plotting capabilities, but that was very slow for such a huge amount of data (sometimes we review measurements from a few days, with samples every minute). Then an answer came to my mind: gnuplot.
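Before moving on to plotting: a probe producing data in this format can be as simple as the sketch below (the process list, the output file name and the use of ps are my assumptions; the original probes are not shown here).

#!/bin/sh
# Append one line per minute: a timestamp plus the resident set size (kB) of selected processes.
PROCS="playerd appman datasync"          # illustrative subset of the monitored processes
OUT=memory-usage.dat

# header line, so the plotting script below knows the first column holds timestamps
[ -f "$OUT" ] || echo "TIME $PROCS" > "$OUT"

while true; do
    line=`date +%m%d-%H%M`               # matches the MMDD-HHMM format used above
    for p in $PROCS; do
        rss=`ps -C "$p" -o rss= | head -n 1`
        line="$line ${rss:-0}"
    done
    echo "$line" >> "$OUT"
    sleep 60
done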

Gnuplot is an open source tool to draw any data using simple commands. It's very fast and can work as a batch process.

The script:

#!/bin/sh
export LC_ALL="C"

csv=$1                  # data file to plot
DIR=`dirname $0`        # location of the awk helper script

{
echo "set terminal x11 font \",10\""
awk -f $DIR/plot-csv.awk $csv
} | gnuplot --persist

is pretty simple: it parses the first line of the data file and creates commands for the gnuplot interpreter. The awk script handles all the dirty details:

#!/usr/bin/awk -f

# Builds a gnuplot "plot" command from the header (first) line of the data file.
FNR==1 {
    if ($1 == "TIME") {
        # the first column holds timestamps in MMDD-HHMM format
        timeMode = 1
        startColumn = 2
        print "set xdata time"
        print "set timefmt \"%m%d-%H%M\""
    }
    else {
        timeMode = 0
        startColumn = 1
    }

    F2 = FILENAME
    gsub(/\//, "-", F2)        # FILENAME with "/" replaced by "-"

    # one data series per column, labelled by its column header
    printf("set key samplen 1; set key horizontal left bottom outside autotitle columnhead; plot \"%s\" ", FILENAME)
    for (n = startColumn; n <= NF; n++) {
        if (n > startColumn) {
            printf("\"\" ")              # reuse the previous data file
        }
        if (timeMode) {
            printf("using 1:%d ", n);
        }
        else {
            printf("using :%d ", n);
        }

        if ($n ~ /USER|SYSTEM|IDLE/) {
            printf(" lw .2 ");           # draw CPU-related columns thinner
        }
        else {
            printf(" with lines ");
        }

        if (n < NF) {
            printf(",")
        }
    }
    print ";";
}
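Assuming the wrapper above is saved as plot.sh next to plot-csv.awk (the wrapper's file name is not given here, so plot.sh is just a placeholder), plotting a collected data file is a single call; the --persist flag keeps the X11 window open after gnuplot exits:

$ ./plot.sh memory-usage.dat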

Result: the leak in one of the processes is clearly visible in the picture:

[gnuplot chart of the collected memory measurements]

One picture instead of a long data table will tell you much more about performance than the numbers alone.

Tags: monitoring.


Created by Chronicle v3.5