Dariusz on Software Quality & Performance

07/08/2014

Automated RTP header analysis


IPTV technology delivers video streams using fast but not reliable protocols (UDP). These connection-less protocols guarantee neither delivery nor retransmission of missing packets. We either have to accept low video quality on some networks or add another layer above the basic protocol that lets us control completeness of delivery. For this purpose the RTP (packet order, completeness) and RTCP (retransmission requests) protocols are used.

In this post I'm going to show how to effectively use Linux command line tools to analyse client-server cooperation regarding retransmissions for a complicated real-world example. Basic RTP metadata collected by tcpdump is used as an input.
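As a taste of the technique, here is a minimal sketch of gap detection. It assumes the capture has already been reduced to one RTP sequence number per line (for example with tshark -r capture.pcap -d udp.port==5004,rtp -T fields -e rtp.seq; the file name and port are illustrative), since raw tcpdump output needs extra parsing:

    # report gaps in a stream of 16-bit RTP sequence numbers, one per line;
    # the counter wraps around at 65536
    awk '
        NR > 1 {
            expected = (prev + 1) % 65536
            if ($1 != expected)
                printf "gap after %d: expected %d, got %d\n", prev, expected, $1
        }
        { prev = $1 }
    ' rtp_seq.txt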


16/04/2014

Wavemon – monitor your Wi-Fi connection quality


An old truth known to everyone who spends days on business trips: hotels generally suck at delivering local Internet access. This seemingly least important hotel service becomes pretty crucial if you depend on it to finish some work after business hours.

However, if you are on a Linux/Ubuntu machine there's a nice tool that allows you to evaluate Wi-Fi signal quality. Its name is wavemon. It's a console tool that shows current (and past) signal strength.
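Installing and running it on Ubuntu takes two commands (package name as found in the standard repositories; the interface name below is illustrative):

    # install and launch; -i picks the wireless interface to monitor
    sudo apt-get install wavemon
    wavemon -i wlan0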


With this real-time measurement you can decide which area of the hotel has the best signal strength.

29/11/2013

Check current network bandwidth used on a Linux/Unix box


Sometimes you need to quickly measure the current bandwidth used by your Linux box and don't have a dedicated command installed. You can use the standard /proc/ file entries to get that information from the system.

Example for an embedded device with a TS stream as its input:

( cat /proc/net/dev; sleep 1; cat /proc/net/dev; ) | awk '/eth0:/ { sub(/^.*eth0: */,""); b=$1; if(bp){ print (b-bp)/1024/1024, "MB/s" }; bp=b }'
1.00053 MB/s

23/05/2013

Automatic server flood detection


My current customer develops embedded devices used by many end users. In order to reduce server load, the devices use multicast for downloading data: every device registers itself on a multicast channel using IGMP and listens to UDP packets. With no connections to manage, the overhead is lower.
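On the device side the group membership can be verified with standard Linux tools (the interface name here is illustrative):

    # list multicast group memberships on an interface
    ip maddr show dev eth0
    # or inspect the kernel IGMP table directly
    cat /proc/net/igmp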

However, some data (or some requests) cannot be downloaded from multicast channels, so HTTP/HTTPS must be used to interact with the server. As the number of devices is very high, special methods (randomised delays, client software optimisation) are used so as not to overload the backend servers.

Consequently, a small bug in the client software that triggers more events than usual can be very dangerous to the stability of the whole system (thousands of devices acting in concert make a perfect DDoS that may kill the back-end). In order to catch such errant behaviour as soon as possible, I've developed a daily report that monitors server usage at my customer's office.

First of all, we need to locate the most "interesting" device by IP address from the logs (we list the top 20 IPs by server usage):

    # count requests per (client IP, URL) pair and per client IP; assumes a
    # Tomcat access log where $1 is the client IP and $7 the request path
    ssh $server "cat /path-to-logs/localhost_access_log.$date.log" | awk '
        {
            t[$1 " " $7]++
            ip[$1]++
        }
        END {
            for(a in t) { print t[a], a }
            # remember the busiest client IP for the next steps
            max = 0
            max_ip = ""
            for(a in ip) { if(ip[a] > max) { max=ip[a]; max_ip=a; } }
            print max_ip > "/tmp/max_ip"
        }
    ' | sort -n | tail -20
    IP="`cat /tmp/max_ip`"

Then the selected IP is examined hour by hour to locate patterns in its behaviour.
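The full breakdown is in the rest of the post; as a rough illustration, an hourly histogram for that IP might be built like this (a sketch assuming the same Tomcat access-log format, where field 4 starts with [23/May/2013:14:05:12):

    # requests per hour for the busiest IP found above
    ssh $server "cat /path-to-logs/localhost_access_log.$date.log" | awk -v ip="$IP" '
        $1 == ip {
            split($4, ts, ":")    # ts[2] holds the hour
            hours[ts[2]]++
        }
        END { for(h in hours) print h ":00", hours[h] }
    ' | sort -n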


Google.com performance analysis over a few years


Time is money. No doubt about that. The more time your customer waits for your service to respond, the fewer coins may hit your pocket. You may ask: why? If your page loads too slowly, your "almost" customer hits Back in the browser and selects another link from the sponsored links Google provides. You lose this customer in favour of another supplier.

What about Google? Was their service (no kidding, it is a service that searches information, paid for by ads) always so blazingly fast? Let's check this site's performance, monitored from a London server, over a few years:

As we can see, there were some small problems in 2009 (~5% of requests required more than 1 second to be processed). Not so bad.
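The original monitoring setup isn't shown in this excerpt; if you want to run a similar probe yourself, a minimal sketch with curl (single URL, no percentile aggregation) could be:

    # collect 100 samples of total response time in seconds, sorted
    # so the slow tail is easy to eyeball
    for i in $(seq 1 100); do
        curl -o /dev/null -s -w '%{time_total}\n' https://www.google.com/
        sleep 1
    done | sort -n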


