Subject: Re: [Gnumed-devel] ? top down faster for emr browser
Date: Tue, 02 May 2006 13:16:37 +0800
I think it's spent doing the processing: I'm using python os.system('openssl enc -aes128 -pass file:pass.txt')
etc., starting a new process, with new input and output streams, for each word marked as
requiring encryption after parsing the narrative. The narrative parsing isn't too fast either,
as I remember from when I was just replacing marked words with 'xxxx'.
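One way to cut that per-word cost would be to start the child process once per batch and pipe all the marked words through it, amortizing the process-startup overhead. A minimal sketch using subprocess instead of os.system; the filter command here is a portable stand-in (the real openssl command line, and how you delimit ciphertexts, would depend on the output format you need):

```python
import subprocess
import sys

def encrypt_words(words, cmd):
    """Pipe all marked words through ONE child process instead of
    spawning one process per word. `cmd` is the filter command,
    e.g. an openssl invocation; it reads stdin, writes stdout."""
    joined = "\n".join(words)
    result = subprocess.run(cmd, input=joined, capture_output=True,
                            text=True, check=True)
    return result.stdout.splitlines()

# Portable stand-in filter (uppercasing) in place of openssl:
filter_cmd = [sys.executable, "-c",
              "import sys; sys.stdout.write(sys.stdin.read().upper())"]
print(encrypt_words(["secret", "name", "address"], filter_cmd))
# → ['SECRET', 'NAME', 'ADDRESS'] -- one fork for three words, not three
```

With `os.system` you pay a shell plus an openssl startup per word; with one pipe per batch of 101 rows, the startup cost becomes negligible.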
I think I'm also doing an SQL lookup on the marked-word table for every word parsed in the narrative, to see whether it is marked.
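That per-word round-trip could be replaced by loading the marked-word table into a Python set once, then testing membership in memory. A sketch (sqlite3 stands in for the real PostgreSQL connection, and the table/column names are hypothetical):

```python
import sqlite3

# Load the marked-word table ONCE, then test membership in memory
# instead of issuing one SQL query per parsed word.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE marked_words (word TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO marked_words VALUES (?)",
                 [("smith",), ("jones",)])

marked = {row[0] for row in conn.execute("SELECT word FROM marked_words")}

narrative = "patient smith saw dr jones today".split()
flagged = [w for w in narrative if w in marked]  # O(1) per word, no round-trip
print(flagged)
# → ['smith', 'jones']
```

For 360,000 rows of narrative this turns hundreds of thousands of queries into one.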
The updates are written into a file as a batch of 101 rows, and then another system process, "psql -f batchxxx", is run for each batch of 101 rows. I am only using one connection.
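The file-plus-psql step could also be folded into the connection you already hold: apply each batch with executemany in one transaction, so no extra process is forked per 101 rows. A sketch, again with sqlite3 as a stand-in for the real PostgreSQL connection and a hypothetical schema:

```python
import sqlite3

# Apply each 101-row batch through the already-open connection with
# executemany, instead of writing a file and forking "psql -f".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE narrative (pk INTEGER PRIMARY KEY, text TEXT)")
conn.executemany("INSERT INTO narrative VALUES (?, ?)",
                 [(1, "old a"), (2, "old b"), (3, "old c")])

batch = [("enc(a)", 1), ("enc(b)", 2), ("enc(c)", 3)]  # (new_text, pk) pairs
conn.executemany("UPDATE narrative SET text = ? WHERE pk = ?", batch)
conn.commit()  # one transaction per batch, no psql subprocess

print(conn.execute("SELECT text FROM narrative ORDER BY pk").fetchall())
# → [('enc(a)',), ('enc(b)',), ('enc(c)',)]
```

With psycopg the same pattern applies (cursor.executemany plus commit), and committing once per batch keeps the transaction overhead close to what the psql file gave you.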
On Mon May 1 0:07, Karsten Hilbert sent:
> > narratives in a duplicate database. (This
> > takes a long time, about 12 hours or more for 360,000 rows.)
> Thanks for hammering the schema. Is most of this time spent
> reading/updating the database? Does it use a lot of new