
Entries from April 2010.

How To Debug JNDI Lookup Problems

Sometimes when you get an exception like this:

javax.naming.NameNotFoundException: Name "comp/UserTransaction" not found in context "java:"

you may want to see which entries are visible in JNDI. No problem: place this code near the failing lookup:

import javax.naming.InitialContext;
import javax.naming.NameClassPair;
import javax.naming.NamingEnumeration;

InitialContext ic = new InitialContext();
NamingEnumeration<NameClassPair> it = ic.list("java:comp");
System.out.println("JNDI entries:");
while (it.hasMore()) {
    NameClassPair nc = it.next();
    System.out.println("JNDI entry: " + nc.getName());
}

You will then see all available JNDI names on the console (you can of course use your logging library instead).

"Secure Connection Failed" on

Oops! Someone forgot to renew an SSL certificate :-)

Websphere Extreme Scale + Hibernate = ?

Recently I was asked to integrate WXS (Websphere Extreme Scale, a commercial cache implementation from IBM) into an existing WPS (Websphere Process Server) based project, to implement a read-only, non-distributed cache (one independent cache instance per JVM). The idea was to plug the cache implementation in through the 2nd level cache interfaces defined by Hibernate.

I plugged objectgrid.jar into the Hibernate configuration:

<property name="cache.provider_class">...</property>

then saw the following exception during Hibernate initialisation:

[4/29/10 10:08:34:462 CEST] 00000069 RegisteredSyn E WTRN0074E:
Exception caught from after_completion synchronization operation:
org/hibernate/cache/CacheException.<init>(Ljava/lang/Throwable;)V
at org.hibernate.cache.UpdateTimestampsCache.invalidate(

WXS expects the CacheException(java.lang.Throwable) constructor; the Hibernate API has provided it since version 3.2. Version 3.1, currently used in the project, doesn't have it (only CacheException(java.lang.Exception) is present). This issue forced a Hibernate upgrade in the project. Note that no Hibernate version requirement is listed on the official "requirements" page (huh?).


Lessons learned:

  • BIG BLUE is not a guarantee of high-quality documentation
  • Closed source is harder to work with (any problem puts you in the support waiting queue)
  • "Big" != good (IMHO): would you pay $14,400.00 per PVU for a read-only cache implementation and deploy a giant objectgrid.jar (15 MB) for this very purpose? Yikes!

I'm going to talk with the project sponsor about better ways to spend the money (my salary increase, maybe ;-) ). Seriously: the free EHCache / OSCache are more than sufficient for the task. They are small and Open Source.

"Do not hire Godzilla to pull Santa's sleigh, reindeers are better for this purpose" - Bart Simpson.

SSL Certificate for Lighttpd HOWTO

When your customers enter your website they do not want their passwords or credit card information to be visible to everyone sniffing the local network or one of the routers along the way. That's why SSL (Secure Socket Layer) was born: in simple words, it wraps the HTTP connection in a secure tunnel.

Another story is the possibility of a man-in-the-middle attack or faked DNS server responses. You (as the customer opening the webpage) should be sure you are connecting to the website you intended to (fake bank websites are a big risk for your money, so this matters). That's why certification is closely bundled with connection encryption.

I'll show you how to obtain and install an SSL certificate under the Lighttpd web server to make your website more trustworthy for your customers.

First, create a directory structure that will make organisation easier:

# mkdir -p /etc/lighttpd/ssl/
# cd /etc/lighttpd/ssl/

Create the server key (you will be prompted for a password) and the CSR (Certificate Signing Request) that will be used for certificate creation, in one step:

# openssl req -newkey rsa:2048 -keyout server.key -out server.csr

Remove the attached password (I do not want to have to type the password on every server restart):

# openssl rsa -in server.key -out server.nopass.key

Then pass the generated server.csr to your SSL certificate provider. You will have to prove that you own the domain (an email with a special URL will be sent to it). After successful verification the certificate is created. Place (paste) this certificate into the /etc/lighttpd/ssl/server.crt file.

Then you have to create a pem file from the key and the certificate (not sure why it's organised that way):

# cat server.nopass.key server.crt > server.pem
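Before restarting Lighttpd it is worth checking that the certificate really matches the private key (a mismatch is easy to introduce while pasting files around). Here is a minimal sketch; it generates a throwaway self-signed pair so it can run anywhere, and demo.key / demo.crt are only stand-ins for your real files:

```shell
# Throwaway self-signed pair, only so the check below is reproducible;
# point the last two commands at your real key and certificate instead.
openssl req -x509 -nodes -newkey rsa:2048 -subj "/CN=example.test" \
        -keyout demo.key -out demo.crt -days 1 2>/dev/null

# The certificate matches the key when the two RSA moduli are identical:
openssl x509 -noout -modulus -in demo.crt
openssl rsa  -noout -modulus -in demo.key
```

If the two Modulus= lines differ, the pem file was assembled from mismatched parts.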

Then you have to tell Lighttpd to handle SSL traffic for given IP address and port:

$SERVER["socket"] == "IP-ADDRESS-HERE:443" {
    ssl.engine = "enable"
    ssl.pemfile = "/etc/lighttpd/ssl/server.pem" = "/etc/lighttpd/ssl/ca-bundle.crt"
}

( points at the CA chain file from your certificate provider; the file names here are just examples.)

First note: for SSL traffic you have to specify an IP address, not a domain name. The SSL handshake happens BEFORE any headers are sent to the server, so name-based virtual hosts are not possible (certificates must be checked first).

Second note: if you use the same domain for HTTP and HTTPS traffic you don't have to specify server.document-root and the other domain-related parameters. They will be borrowed from the:

$HTTP["host"] == "" {

(plain HTTP) section.

Now a browser redirected to your https:// address should show your web application without warnings.

Happy SSL-ing!

Defining Services

The reservation system lets patients book visits with specialists over the Internet. Until now, the first step was for the patient to choose a specialist in order to book a visit. To make the choice easier when many specialists are available, the reservation system has been extended with service selection.

  • Services may have different durations (and that duration will be reserved in the specialist's calendar)
  • Services can be assigned to different specialists

Enabling the mechanism is very simple: first, define the services and assign them to the chosen doctors (the "Usługi" ("Services") tab).

From now on, the patient first picks the service they are interested in:

Then they choose a specialist who provides that service (seeing a description and the nearest available date):

From there the registration procedure proceeds as before (with an SMS password).

How To Migrate Django To Different Database Backend

Changing database location is simple: launch a dump on the source database server, import it into the destination database, redirect the domain and voila! You can use this method to migrate your database to a newer database engine version. But what can you do if you realise the whole backend must be changed (e.g. from MySQL to PostgreSQL)?

Migrating an SQL dump to a different database dialect is not easy (column types and date formats are the first examples that come to mind). But you don't have to operate on SQL dumps. The simple answer here is: dumpdata.

Django uses a special script,, to manage typical operations: database initialisation, preloading data, dropping tables and so on. The command:

./ dumpdata appname

prints to stdout all data contained in appname, in a universal JSON format. The database state must be reset before the import; that's why sqlreset is used: it prints DROP (and CREATE) statements to stdout, which purge the database of the app's tables when passed to an SQL execution tool. Then you can load the dump you just created:

./ sqlreset appname | psql ...
./ loaddata filename.json

Additionally you can gzip the JSON data to make the migration (much) faster:

./ dumpdata appname | gzip -c | ssh destinationserv 'cat > data.json.gz'
(log in to destinationserv ...)
./ sqlreset appname | psql ...
gzip -dc data.json.gz | ./ loaddata -
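The ssh and steps need real servers, but the compression leg of the pipeline can be sketched locally; the file names and the one-record fixture below are made up for illustration:

```shell
# A tiny hand-written stand-in for real dumpdata output:
printf '[{"model": "appname.entry", "pk": 1, "fields": {}}]' > data.json

# What you would pipe through ssh to the destination server:
gzip -c data.json > data.json.gz

# What loaddata would consume on the other side:
gzip -dc data.json.gz
```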

Happy migrating!

Why does svn:mime-type matter?

You probably already know that Subversion stores some metadata for every file added to the repository; it's called "properties" in Subversion vocabulary. This key-value map is responsible for registering ignored file masks, file attributes, internal file content type etc.

The property I'm going to present today is "svn:mime-type". It describes file content similarly to the HTTP "Content-Type" header, telling the svn client how to handle the file. Typical values are "text/plain" and "application/octet-stream". The first part of the mime-type is especially important:

  • text/*: line-by-line merges are used, diffs are generated
  • any other prefix: no text merges are performed

If you do not set this property right you may end up with a messed-up binary file (end-of-line conversions) or non-mergeable changes in a text file (one marked as binary by mistake).

Of course, when adding a file to the workspace one can forget to set these properties correctly. Here auto-props come to the rescue. Auto-props are applied when the "svn add" command (from the command line or from a GUI) is issued. The configuration is placed in the "~/.subversion/config" file. Here's my config fragment from one of my projects:

[miscellany]
enable-auto-props = yes

[auto-props]
*.csv = svn:mime-type=text/plain
*.java = svn:mime-type=text/plain
*.sql = svn:mime-type=text/plain;svn:keywords=Author Date Id Revision URL
*.jar = svn:mime-type=application/octet-stream

Besides mime-type, svn:keywords is set in the example; it controls which keywords are expanded in source files.

Linux Command-line Toolset: xclip

To use a modern Linux distribution you don't have to look at a console window: all important system properties are configurable with GUI tools. There are times, however, when a small bit of scripting becomes very useful. To connect the GUI and console worlds easily you have to pass data between them. A small tool, xclip, helps with this task.

It's very easy to install the tool under Debian-based systems:

apt-get install xclip

Collect a huge selection into a file (pasting it into a terminal can be very slow):

xclip -o > file_name.txt

Download selected URL into /tmp/file.out:

URL=`xclip -o`; test -z "$URL" || wget -O /tmp/file.out "$URL"

Place file contents into clipboard:

xclip -i < file_path

As you can see, xclip is a handy tool built with unix-style elegance in mind. Many other uses can be found for it.

Recreate Derby Database Under WebSphere

WebSphere uses SQL databases for internal management of MQ queues (with the Derby database engine under the covers). Sometimes you need to reset their state. Here's a script that erases and recreates the BPEDB database (tested under WS 6.1.2):

rm -rf $WID_HOME/pf/wps/databases/BPEDB
echo "CONNECT 'jdbc:derby:$WID_HOME/pf/wps/databases/BPEDB;create=true' AS BPEDB;" |\
    $WID_HOME/runtimes/bi_v61/derby/bin/embedded/ij.sh /dev/stdin

Second Level Cache For SQL Queries Under Hibernate

The second level cache in Hibernate can greatly speed up your application by minimizing the number of SQL queries issued and serving some results from an in-memory cache (with optional disk storage or a distributed cache). You can plug in different cache libraries (or Bring Your Own Cache, an approach popular among my colleagues from India ;-) ). There are caching limits you must be aware of when implementing a 2nd level cache.

ID-based entity lookup cache

Caching entity lookup by id is very straightforward:

    <!-- any cache provider will do, e.g. the stock EHCache one: -->
    <property name="cache.provider_class">org.hibernate.cache.EhCacheProvider</property>

<class name="EntityClass" table="ENTITY_TABLE">
   <cache usage="read-only" />
</class>

From then on, selecting EntityClass by id will use the cache (BTW remember to set a cache expiration policy if the entity is mutable!). But querying by other entity attributes will not cache the results.

Query cache

Here so-called "query cache" jumps in:

    <property name="cache.use_query_cache">true</property>

Query query = session.createQuery("from EntityClass where ...");
query.setCacheable(true);

and voila! Issued queries (together with their parameters) become the indices of cached results. A very important detail is the value stored in the cache: entity keys and types are stored, not the entities themselves. Why is it important? Because it complicates caching SQL queries.

Caching SQL queries

The query cache requires the query result to be a Hibernate-known entity, for the reason I mentioned above. That makes it impossible to cache the following construct:

<!-- Map the resultset onto a map. -->
<class name="com.company.MapDTO" entity-name="EntityMap">
    <id name="entityKey" type="java.math.BigDecimal" column="ID" length="38"
        access="" />
    <property name="achLimit" type="java.lang.String" length="256" access="" />
</class>

<!-- alias is used in the query -->
<resultset name="EntityMapList">
    <return alias="list" entity-name="EntityMap" />
</resultset>

<sql-query name="NamedQueryName" resultset-ref="EntityMapList">
    SELECT (...) AS {list.entityKey}
</sql-query>

We will get the error:

Error: could not load an entity: EntityMap#1028
ORA-00942: table or view does not exist

EntityMap cannot be loaded separately by Hibernate because it's a DTO (data transfer object), not an entity. How to fix this error? Just change the result of the named query from the DTO to the corresponding entity. Then the retrieved entity will be cached properly by Hibernate.

A Migration to the UK

S1 is a web monitoring service that continuously measures the response time of a server.

Random network instability over the last few days forced a quick migration of S1 to a new server. A Linode slice deployed in the UK was chosen as the platform for the S1 monitoring station. Their networks and servers have proved very stable over the past few years.

I hope the new infrastructure will give better measurements and a lower false-notification ratio.

Variable-Length Reservations

The system is a platform that lets a clinic integrate online visit booking into its existing website.

One of my clients needed to offer his patients reservations of variable length (depending on the kind of service). He used a temporary (admittedly clever) workaround: booking two consecutive slots for a longer visit. It was, however, quite inconvenient for patients.

Today the system has been extended with the ability to define services (together with their durations), so choosing a specialist and a service is convenient. I invite you to take a look at the current version of the system.