
Re: [Gnumed-devel] speeding up GNUmed


From: Karsten Hilbert
Subject: Re: [Gnumed-devel] speeding up GNUmed
Date: Wed, 07 Apr 2010 11:32:25 +0200

> > - patient with 190 encounters, 36 documents and 56 pages therein
> > 
> > While this laptop was both bzip2ing up a 7 GB backup and
> > rsyncing another one to a second machine it took 30 seconds
> > to populate the document tree.
> > 
> > A few document load SQL tweaks later it took well under one
> > second under the same load.
> 
> So the slowness resulted from the SQL ? How did you go about optimizing ?

As it was, we first retrieved the pertinent document IDs (primary keys) and
then went back to the database once per document, retrieving its data
during instantiation of the corresponding business object.

The business objects, however, can also be instantiated from a complete
row rather than from a primary key.

So I switched from loading just the IDs to loading the full data right
away and creating the objects from the pre-retrieved rows. That meant only
*one* round trip to the database rather than one plus as many as there
are documents.
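The change can be sketched roughly as follows. This is not GNUmed code; the class and table names are simplified stand-ins (GNUmed talks to PostgreSQL, but the stdlib sqlite3 module keeps the sketch self-contained), and only the 1+N-versus-1 round-trip pattern is the point:

```python
import sqlite3

# Toy database standing in for the document tables.
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.executescript("""
    CREATE TABLE doc_med (pk INTEGER PRIMARY KEY, comment TEXT);
    INSERT INTO doc_med (pk, comment) VALUES
        (1, 'lab report'),
        (2, 'referral letter');
""")

class cDocument:
    """Business object instantiable from a PK (extra query) or a full row."""
    def __init__(self, pk=None, row=None):
        if row is None:
            # old approach: one additional round trip per document
            row = conn.execute(
                "SELECT * FROM doc_med WHERE pk = ?", (pk,)
            ).fetchone()
        self.pk = row["pk"]
        self.comment = row["comment"]

# old: fetch PKs first, then re-query per document (1 + N round trips)
pks = [r["pk"] for r in conn.execute("SELECT pk FROM doc_med ORDER BY pk")]
docs_slow = [cDocument(pk=pk) for pk in pks]

# new: fetch the full rows once, build objects from pre-retrieved data (1 round trip)
rows = conn.execute("SELECT * FROM doc_med ORDER BY pk").fetchall()
docs_fast = [cDocument(row=r) for r in rows]
```

Both lists end up identical; the second variant simply never goes back to the server per document, which is what made the 30-second tree population drop to under a second.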

Credit where credit is due: this approach was originally suggested
by Ian back in the day (following which I implemented
the instantiate-from-row-data method) and has since been applied successfully
in several places where loading data proved slow.

> I would have thought that, unless the SQL itself incurred retrieval
> overhead, there is little to speed up in the query itself.

Oh, there could also have been missing indexes or unfortunate joins.
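Ruling those out amounts to inspecting the query plan. In PostgreSQL one would run EXPLAIN (ANALYZE) on the offending statement; purely as an illustration, here is the same check against sqlite3, with a hypothetical table loosely named after GNUmed's document-object table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE doc_obj (pk INTEGER PRIMARY KEY, fk_doc INTEGER, data BLOB)")

# Without an index on fk_doc the planner must scan the whole table ...
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM doc_obj WHERE fk_doc = 1"
).fetchall()
print(plan_before[0][-1])   # e.g. 'SCAN doc_obj'

# ... after adding one it can do an indexed lookup instead.
conn.execute("CREATE INDEX idx_doc_obj_fk_doc ON doc_obj (fk_doc)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM doc_obj WHERE fk_doc = 1"
).fetchall()
print(plan_after[0][-1])    # e.g. 'SEARCH doc_obj USING INDEX idx_doc_obj_fk_doc (fk_doc=?)'
```

A full-table SCAN in the plan for a selective WHERE clause is the classic signature of a missing index; here it was absent, so the round trips were the whole story.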

Neither was the case.

Karsten
