Subject: Re: [Gnumed-devel] some 1st cut proposed optimizations to emrbrowser
Date: Sat, 27 May 2006 21:12:47 +0800
On another list, I mentioned that I thought it doesn't matter whether the backend is a networked flat-file database
like DBase, or a sophisticated column-clustering, AI-tree-pruning database like Postgres: when it comes to retrieving
very large medical records for user browsing and editing, performance is greatly affected by the access pattern of the
client program. I proposed that the ability to retrieve title/summary data separately from large blocks of text or
binary data was one important consideration, and that, from the point of view of usability, an EMR viewer should open
quickly, without blocking while waiting for large amounts of data to arrive from the server — that is, non-blocking
retrieval of the medical record. Karsten argues that incomplete display of data may not be desirable.
The changes are a compromise: the user only sees one encounter narrative set at a time when using the emrBrowser, so why
not retrieve each narrative set as it is browsed? However, at some stage it might be useful for the client to have
the whole EMR record cached client-side, e.g. for faster browser operation, so why not also start a separate thread which
does the original bulk retrieval of the narratives and, when it completes, loads the client-side cache — provided the
calling clinical record object still exists (i.e. the user hasn't switched to another patient in the meantime).
It's just a bit of fun to make GNUmed more sophisticated, but really, retrieving one encounter at a time
for EMR browsing is enough, and that part isn't multithreaded.
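The compromise above — serve each browsed encounter with a fast query until a one-shot background thread has filled the client-side cache — can be sketched roughly like this. All names (NarrativeCache, fast_query, bulk_query) are illustrative stand-ins, not GNUmed's actual API:

```python
import threading

class NarrativeCache:
    """Sketch of the lazy-plus-bulk strategy: fast per-encounter queries
    until a background thread publishes the full cache."""

    def __init__(self, fast_query, bulk_query):
        self._fast_query = fast_query  # fetches one encounter's narratives
        self._bulk_query = bulk_query  # fetches all narratives for the patient
        self._cache = None             # filled by the background thread
        self._lock = threading.Lock()
        self._thread = None

    def get(self, encounter_id):
        with self._lock:
            if self._cache is not None:
                return self._cache.get(encounter_id, [])
            # start the bulk retrieval exactly once, in the background
            if self._thread is None:
                self._thread = threading.Thread(target=self._fill, daemon=True)
                self._thread.start()
        # cache not ready yet: fall back to the fast, single-encounter query
        return self._fast_query(encounter_id)

    def _fill(self):
        rows = self._bulk_query()      # slow: all narratives at once
        with self._lock:
            self._cache = rows         # atomically publish the cache
```

Keeping the thread object referenced on the instance is what guarantees the bulk retrieval is only started once, as described below in the original post.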
With respect to the display of the EMR journal, this can be made scalable by retrieving only the parts of the journal text
that fill the viewable text area, listening in on the scrollbar; or you could do it much like a progressive GIF image update,
where you first render a skeleton showing the chronology of encounters and the number and type of narratives,
and then gradually fill in the narrative text depending on where the scrollbar position falls relative to the positions of the
encounters in the skeleton text. Well, that's one hare-brained idea anyway.
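The skeleton-then-fill idea needs one small piece of machinery: mapping the scroll position onto the encounters whose sections are currently visible, so only their narratives get fetched. A minimal sketch, with all names hypothetical and offsets measured in characters for simplicity (a real text widget would use lines or pixels):

```python
def encounters_in_view(skeleton_offsets, scroll_top, view_height):
    """Return the ids of encounters whose skeleton section overlaps the
    visible window. skeleton_offsets is a list of (encounter_id,
    start_offset) pairs, sorted by offset; each section runs from its
    start to the next section's start."""
    visible = []
    scroll_bottom = scroll_top + view_height
    for i, (enc_id, start) in enumerate(skeleton_offsets):
        end = (skeleton_offsets[i + 1][1]
               if i + 1 < len(skeleton_offsets) else float("inf"))
        # half-open interval [start, end) overlaps [scroll_top, scroll_bottom)
        if start < scroll_bottom and end > scroll_top:
            visible.append(enc_id)
    return visible
```

A scroll-event handler would call this and issue fast per-encounter queries only for ids not yet filled in.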
On Sat May 27 14:34, Sebastian Hilbert sent:
On Saturday 27 May 2006 13:44, Syan Tan wrote:
I don't understand half of what you are talking about, but I appreciate your
work and reporting.
@all: Is there a way to provide GNUmed with a huge set of patient data?
I am keen on speed tests. I know there is a problem with real patient data.
> one problem is that, I think, if a slow narrative retrieval is initiated for
> one patient, and another patient is selected before it returns, garbage
> collection may prevent the next patient being loaded until the thread
> completes, because the thread holds a reference to the gmClinicalRecord
> object being garbage collected (I think). I tried making the later part
> of the threaded function keep a weak reference to self (the gmClinicalRecord
> object) and then checking whether the weakref returns self or None before
> updating self with the retrieved rows, and this seems to keep things
> flowing. I'd swear that GNUmed now is faster than the EMR at work for
> browsing a series of patients with really thick medical histories, because
> GNUmed doesn't block (well, it won't once the EMR journal is also fixed)!
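The weak-reference guard described in the quote above can be sketched as follows. The real class is GNUmed's gmClinicalRecord; everything else here (ClinicalRecord, bulk_query, rows) is an illustrative stand-in:

```python
import threading
import weakref

class ClinicalRecord:
    """Sketch: the bulk-retrieval thread keeps only a weak reference to
    the record, so it does not pin the record alive after the user has
    switched to another patient."""

    def __init__(self, patient_id, bulk_query):
        self.patient_id = patient_id
        self.rows = None
        ref = weakref.ref(self)  # weak: does not block garbage collection
        self._thread = threading.Thread(
            target=ClinicalRecord._fetch, args=(ref, bulk_query), daemon=True)
        self._thread.start()

    @staticmethod
    def _fetch(ref, bulk_query):
        rows = bulk_query()   # slow bulk retrieval runs off the main thread
        record = ref()        # None if the record was garbage collected
        if record is not None:
            record.rows = rows  # only update a record that still exists
```

If the record has already been collected, `ref()` returns None and the fetched rows are simply discarded, so the dying record never delays loading the next patient.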
> On Sat May 27 12:11, Syan Tan sent:
> I got a bit tired of waiting for the narratives to update, so I took a look
> into the EMR browser and was able to get multithreaded narrative bulk
> retrieval working.
> When a user selects a narrative node, a check is made to see whether the
> cache for narratives is empty; if it is, a fast query fetches just the
> narrative the user selected.
> After that, a thread is started off to get the bulk of the narratives for
> the patient; whenever the user browses another narrative, if the
> cache is still not updated, another fast query is done. The thread is only
> started once, because a reference to the thread object is kept.
> When the thread gets the narratives in bulk for a patient, it
> constructs the narrative objects and then, with synchronized access to
> the cache, sets the cache reference for the narratives.
> Further user browsing then reads from the cache instead of doing a fast
> query. It could be argued that there is no need for threading, since the
> user could get only the narratives he views via the fast query.
> The fast query is a modification of v_pat_narrative, which uses
> v_pat_items and v_pat_episodes; the same data is returned in a
> similar view with similar names, except that the clin.episodes and
> clin.clin_narrative tables are used directly.
> This avoids sequential scanning of all child tables of clin_root_items,
> which, despite experimenting with postgresql.conf, I was unable to turn
> off, and which makes v_pat_narrative not useful for fast querying on pk_episode.
> In order to get a multithreaded read-only connection to work, the loginInfo for
> the 'historica' service has to be retrieved from the ConnectionPool singleton
> object; gmPG.dbapi is then used to create a new connection and cursor,
> which can be passed into gmPG.run_ro_query as the first parameter.
> This allows the slow bulk select to run simultaneously with the
> intermittent fast selects. I encourage anyone interested in EMR design to
> take a look at GNUmed, as it has some very educational areas in it.
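The key point in the quote above is that the bulk thread must open its own connection rather than share the pool's default one, so the slow select cannot block the fast ones. The gmPG/ConnectionPool calls are specific to GNUmed; as a self-contained illustration of the one-connection-per-thread idea, here is a sketch using Python's sqlite3 (a shared in-memory database) as a stand-in backend — table and column names are invented for the example:

```python
import sqlite3
import threading

# Shared in-memory database standing in for the PostgreSQL backend.
DB = "file:emr?mode=memory&cache=shared"

def setup():
    conn = sqlite3.connect(DB, uri=True)
    conn.execute("CREATE TABLE narrative (pk_episode INTEGER, text TEXT)")
    conn.executemany("INSERT INTO narrative VALUES (?, ?)",
                     [(1, "episode one"), (2, "episode two")])
    conn.commit()
    return conn  # keep open so the shared in-memory db survives

def bulk_fetch(results):
    # runs in its own thread, with its OWN connection, so the slow
    # bulk select cannot block fast selects on the main connection
    conn = sqlite3.connect(DB, uri=True)
    results["bulk"] = conn.execute(
        "SELECT pk_episode, text FROM narrative ORDER BY pk_episode").fetchall()
    conn.close()

def fast_fetch(main_conn, pk_episode):
    # intermittent fast select on the main (UI-side) connection
    return main_conn.execute(
        "SELECT text FROM narrative WHERE pk_episode = ?",
        (pk_episode,)).fetchall()
```

The same shape applies to the GNUmed case: fetch the login info once, open a fresh DB-API connection for the worker thread, and leave the pooled connection free for the quick per-encounter queries.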