Re: [Gnumed-devel] experiments with gnumed - multiusers vnc, importing an au emr
Wed, 19 Apr 2006 09:52:58 +1000
I need to de-identify it first; options for getting it to you include an nx server,
a DVD sent by mail, or publishing it somewhere with 50 MB spare.
BTW - lately I've been playing around with suspend2 in order to get
linux to boot up quickly, and the latest suspend2 somehow disables DMA for the
disk, which slows down disk access by a factor of 10.
The solution is to compile a kernel with the specific IDE driver for the
particular chipset of the computer's motherboard built in rather than as a module,
and use that kernel image for the kernel and initrd entries in /boot/grub/menu.lst.
With that in place, gnumed seems to be working remarkably well now.
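For anyone wanting to try the same, a menu.lst stanza for such a custom kernel
might look roughly like this (the kernel version, device names and paths here
are only illustrative, not the ones from my machine):

```
title  Linux 2.6.16 (IDE driver built in)
root   (hd0,0)
kernel /boot/vmlinuz-2.6.16-custom root=/dev/hda1 ro
initrd /boot/initrd.img-2.6.16-custom
```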
I can publish a howto about it in the wiki if you like.
BTW - besides the names (of patients and doctors), it's a good idea to remap
the suburbs and the street numbers, and possibly the street names too. Are there
any other things that should be remapped to make it a good de-identification?
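The remapping I have in mind could be sketched along these lines - purely a
hypothetical helper, not anything in the gnumed importers: each identifying
value is consistently replaced by a pseudonym from a pool, so the same suburb
always maps to the same fake suburb and linkage within the data set survives.

```python
import hashlib

def remap(value, pool, salt="deident"):
    # Deterministically pick a replacement from the pool: the same
    # input value always yields the same pseudonym.
    digest = hashlib.sha256((salt + value).encode("utf-8")).hexdigest()
    return pool[int(digest, 16) % len(pool)]

# Replacement pool of made-up suburb names (illustrative only).
FAKE_SUBURBS = ["Newtown", "Oldport", "Riverside", "Hillcrest"]

def remap_street_number(number, salt="deident"):
    # Street numbers are replaced with a salted hash-derived number
    # in a plausible range (1..300).
    digest = hashlib.sha256((salt + str(number)).encode("utf-8")).hexdigest()
    return int(digest, 16) % 300 + 1
```

The salt would be kept secret and thrown away afterwards, so the mapping
cannot be reversed from the published data.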
The downside is that the big records get lost among all the names; maybe there
should be a custom function that calculates the size of an EMR record, keyed
by identity id, so that it could be surfaced in the demographic search.
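As a rough sketch of that idea (table and column layout invented here, not
gnumed's actual schema - the real rows would come out of the gnumed_v2
database), a helper could sum the progress-note text per identity and rank
the identities by it:

```python
from collections import defaultdict

def emr_sizes(notes):
    # `notes` is an iterable of (identity_id, note_text) pairs;
    # returns total text size per identity.
    sizes = defaultdict(int)
    for identity_id, text in notes:
        sizes[identity_id] += len(text)
    return dict(sizes)

def biggest_records(notes, limit=10):
    # Identity ids sorted by total EMR text size, largest first -
    # the list a demographic search could use to surface big records.
    sizes = emr_sizes(notes)
    return sorted(sizes, key=sizes.get, reverse=True)[:limit]
```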
On Wed, 19 Apr 2006 02:57 am, Karsten Hilbert wrote:
> On Sat, Apr 01, 2006 at 04:04:39PM +0800, Syan Tan wrote:
> > one can set up multiple thin client sessions between two computers
> > running debian
> > it doesn't matter if any-doc is used as the gnumed user each time.
> Exactly, that's by design.
> > now if you are in au and use a fairly common emr, you can use gnumed
> > to browse the patient demographics and progress notes.
> > the importers scripts are in client/importers/au/md2/a ,
> > use dbf_2_pg first to create a postgresql database from the dbf/dbt
> > files, and then use the script in gnumed_import to transfer the
> > patients.dbf, the doctors.dbf and
> > progress.dbf/.dbt into a gnumed_v2 database,
> Exciting !
> Syan, can I somehow get access to a copy of reasonable-size sample data ?