gnumed-devel

Re: [Gnumed-devel] some 1st cut proposed optimizations to emrbrowser


From: Syan Tan
Subject: Re: [Gnumed-devel] some 1st cut proposed optimizations to emrbrowser
Date: Sat, 27 May 2006 19:44:20 +0800



One problem is that, I think, if a slow narrative fetch is initiated for one patient and another patient is selected before it returns, garbage collection may prevent the next patient from being loaded until the thread completes, because the thread has a reference to the gmClinicalRecord object being garbage collected (I think). I tried making the later part of the threaded function keep only a weak reference to self (the gmClinicalRecord object) and then checking whether the weakref returns self or None before updating self with the retrieved rows, and this seems to keep things flowing.

I'd swear that gnumed is now faster than the emr at work for browsing a series of patients with really thick medical histories, because gnumed doesn't block (well, it won't once the emr journal is also fixed)!
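One way to arrange that weakref check looks roughly like the sketch below; names such as _narrative_cache, _bulk_fetch_worker and _fetch_bulk_narratives are made up for illustration, and the real gmClinicalRecord code does more than this:

import threading
import weakref


def _fetch_bulk_narratives(pk_patient):
    # placeholder for the slow bulk select against clin.clin_narrative
    return []


def _bulk_fetch_worker(record_ref, pk_patient):
    # runs on the background thread; it holds only a weak reference to the
    # clinical record, so an abandoned record is not kept alive by the thread
    rows = _fetch_bulk_narratives(pk_patient)
    record = record_ref()
    if record is None:
        # the gmClinicalRecord was garbage collected while we were querying,
        # i.e. another patient has been activated -- just drop the rows
        return
    with record._cache_lock:
        record._narrative_cache = rows


class gmClinicalRecord(object):
    def __init__(self, pk_patient):
        self.pk_patient = pk_patient
        self._narrative_cache = None
        self._cache_lock = threading.Lock()
        self._bulk_thread = None

    def start_bulk_fetch(self):
        # hand the worker a weakref, not self, so the running thread cannot
        # pin this record in memory once the patient changes
        self._bulk_thread = threading.Thread(
            target=_bulk_fetch_worker,
            args=(weakref.ref(self), self.pk_patient))
        self._bulk_thread.start()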



On Sat May 27 12:11, Syan Tan sent:

I got a bit tired of waiting for the narratives to update, so I took a look into the emr browser and was able to get multi-threaded bulk narrative retrieval working.

When a user selects a narrative node, a check is made to see whether the cache for narratives is empty; if it is, a fast query fetches just the narrative the user selected. After that, a thread is started to fetch the bulk of the narratives for the patient. Whenever the user browses another narrative while the cache is still not updated, another fast query is done. The thread is only started once, because a reference to the thread object is kept.

When the thread has retrieved the narratives in bulk for the patient, it constructs the narrative objects and then, with synchronized access to the cache, sets the cache reference for the narratives. Further user browsing then reads from the cache instead of doing a fast query.
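In code, the browse-side flow is roughly the sketch below; the class and method names are illustrative placeholders, not the actual emr browser code:

import threading


class NarrativeBrowser(object):
    """Illustrative sketch only -- not the real emr browser class."""

    def __init__(self):
        self._cache = None                 # set once by the bulk-fetch thread
        self._cache_lock = threading.Lock()
        self._bulk_thread = None           # kept so the thread starts only once

    def on_narrative_selected(self, pk_episode):
        with self._cache_lock:
            cache = self._cache
        if cache is not None:
            # bulk fetch has finished: serve from the cache, no query needed
            return cache.get(pk_episode)

        # cache still empty: fetch just the selected narrative quickly ...
        row = self._run_fast_narrative_query(pk_episode)

        # ... and start the slow bulk fetch exactly once
        if self._bulk_thread is None:
            self._bulk_thread = threading.Thread(target=self._bulk_fetch)
            self._bulk_thread.start()
        return row

    def _bulk_fetch(self):
        rows = self._run_bulk_narrative_query()           # slow select
        narratives = self._build_narrative_objects(rows)  # dict keyed by pk_episode
        with self._cache_lock:                            # synchronized cache update
            self._cache = narratives

    # --- placeholders for the real queries and object construction ---
    def _run_fast_narrative_query(self, pk_episode):
        return None

    def _run_bulk_narrative_query(self):
        return []

    def _build_narrative_objects(self, rows):
        return {}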

It could be argued that there is no need for threading, as the user could fetch only the narratives he actually views via the fast query.

The fast query is a modification of v_pat_narratives (which uses v_pat_items and v_pat_episodes), returning the same data in a similar view with similar column names, except that only the clin.episodes and clin.clin_narrative tables are used. This avoids the sequential scanning of all child tables of clin_root_items, which I was unable to turn off despite experimenting with postgresql.conf, and which makes v_pat_narratives not useful for fast querying on pk_episode keys.
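For illustration only, the trimmed-down fast query might look something like the string below; the join and the column names (pk_item, clin_when, narrative, fk_episode, ep.pk) are assumptions about the clin schema, not the actual modified view definition:

# assumed table/column names -- only the "two tables instead of the whole
# clin_root_item hierarchy" shape is taken from the description above
fast_narrative_query = u"""
select
    cn.pk_item,
    cn.clin_when,
    cn.narrative,
    ep.pk as pk_episode
from
    clin.clin_narrative cn
    join clin.episode ep on (cn.fk_episode = ep.pk)
where
    ep.pk = %(pk_episode)s
"""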

In order to get a multi-threaded read-only connection to work, the loginInfo for the 'historica' service has to be retrieved from the ConnectionPool singleton object; gmPG.dbapi is then used to create a new connection and cursor, which can be passed into gmPG.run_ro_query as the first parameter. This allows the slow bulk select to run simultaneously with the intermittent fast selects.
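A hedged sketch of that setup: ConnectionPool, gmPG.dbapi and run_ro_query are the names mentioned above, but the accessor, the connect() arguments and the run_ro_query argument list shown here are assumptions rather than the actual gmPG API:

import gmPG   # GNUmed's database access layer

# get login details for the 'historica' service from the pool singleton
pool = gmPG.ConnectionPool()
login = pool.GetLoginInfoFor('historica')    # assumed accessor name

# open a private connection and cursor via the DB-API module gmPG exposes,
# so the slow bulk select does not block the pooled read-only connection
# used by the fast per-narrative queries
conn = gmPG.dbapi.connect(
    database = login.database,               # assumed attribute names
    host     = login.host,
    user     = login.user,
    password = login.password
)
curs = conn.cursor()

# the new cursor goes to run_ro_query as its first parameter; the rest of
# the call here is only illustrative
bulk_query = u"select * from clin.clin_narrative"    # placeholder query
result = gmPG.run_ro_query(curs, bulk_query)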

I encourage anyone interested in emr design to take a look at gnumed, as it has some very educational areas in it.



