Re: [Gnumed-devel] Lab import (data) workflow planning

From: Karsten Hilbert
Subject: Re: [Gnumed-devel] Lab import (data) workflow planning
Date: Sun, 03 Feb 2008 23:29:34 +0100

> Source files will contain messages that pertain to multiple patients  
> in one of (I think just) four scenarios:
> ... persons who already exist in GNUmed and are automagically  
> matchable i.e. according to configurable rules that define the  
> adequacy of a match as not needing user verification
Yes, see my other post.
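The "configurable rules that define the adequacy of a match" could be sketched roughly as follows. This is only an illustration, assuming hypothetical field names and a hypothetical threshold, not GNUmed's actual schema or matching code:

```python
# Sketch of a configurable match-adequacy rule. Field names and the
# threshold are illustrative assumptions, not GNUmed's real schema.

AUTO_MATCH_THRESHOLD = 3  # hypothetical: fields that must agree

def match_score(message_person, gnumed_person):
    """Count how many identity fields agree between an incoming
    lab message and an existing GNUmed person record."""
    score = 0
    for field in ('lastname', 'firstname', 'dob', 'gender'):
        if message_person.get(field) and \
           message_person.get(field) == gnumed_person.get(field):
            score += 1
    return score

def is_auto_matchable(message_person, candidates):
    """A match needs no user verification only if exactly one
    candidate reaches the threshold; otherwise return None and
    leave the message for user-assisted matching."""
    good = [c for c in candidates
            if match_score(message_person, c) >= AUTO_MATCH_THRESHOLD]
    return good[0] if len(good) == 1 else None
```

Note the "exactly one" condition: two equally good candidates is precisely the ambiguity case described below, so the message would stay unmatched.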

> ... persons who already exist in GNUmed but whose matching requires  
> user assistance (or, at least, verification)
IOW where there's ambiguity as to what the match should be. Goes into 
clin.incoming_data_unmatched with a strong set of candidate matches.

> ... patients who do not yet exist in GNUmed but who are appropriate  
> to create from the data as a new patient)
Those go to clin.incoming_data_unmatched with an empty candidate set.

> ... patients who do not yet exist in GNUmed who the praxis may not  
> wish to create (e.g. information sent in error)
As do those.

> regarding this last use case, the praxis may choose to create the  
> person, even though the person did not receive care, to capture the  
> communication to the lab about their error
I agree. Those which the praxis decides not to create will go to 

> One thing I am wondering is whether the parser (Mirth or Hapi) will  
> be smart enough to evaluate and distribute, in a single pass through  
> the source file, every message according to rule-based decisions to  
> determine different places in the backend into which to write the  
> information, or whether all messages will need to be imported into a  
> table, each of whose rows would hold one message in raw data form.
If necessary we can always set up another staging table fronting

> All messages *could* be imported into the table
>       clin.incoming_data_unmatched
> where the auto-matchable records would be migrated out of the table.

> This would leave behind those for which an algorithm can suggest a  
> match, and one other class of message results, which would depend on  
> a user salvaging a match, or abandoning messages which could then be  
> moved over to
>       clin.incoming_data_unmatchable
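The "import everything raw first, migrate matches out later" approach above can be sketched like this, using sqlite3 purely as a stand-in for the real PostgreSQL backend; the column names are illustrative, not GNUmed's actual table definition:

```python
# Minimal sketch of importing every message verbatim into one staging
# table in a single pass, deferring all matching to a later step.
# sqlite3 stands in for PostgreSQL; columns are assumptions.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("""
    CREATE TABLE incoming_data_unmatched (
        pk INTEGER PRIMARY KEY,
        raw_message BLOB NOT NULL,   -- one message, unparsed
        candidate_pks TEXT           -- NULL until a matcher has run
    )
""")

def import_raw_messages(messages):
    """Single pass over the source file: store each message as raw
    data; no matching decisions are made at import time."""
    conn.executemany(
        'INSERT INTO incoming_data_unmatched (raw_message) VALUES (?)',
        [(m,) for m in messages])
    conn.commit()
```

The appeal of this design is that the parser (Mirth or Hapi) never needs to be smart: it only writes rows, and all rule-based routing happens afterwards on the database side.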

> So here is another question... even if it is decided that Mirth or  
> Hapi could evaluate the matching rules, and --- for the well-enough- 
> matched records --- write the results into the clinical tables, we  
> would still end up with some of the message information going into  
> incoming_data_unmatched

> and into incoming_data_unmatchable,
But only after human intervention.

> and so we  
> would need a way for some of *that* data to be re-processed after the  
> identity of the patient had been confirmed or provided.

> So  
> essentially, we would need to use the output of a query on the user- 
> matched records as the input for a post hoc reprocessing of those  
> messages. So if processing will need to be done from these tables  
> anyway, is there value having a front-end channel to intercept part  
> of the data?
It might well make sense to have HAPI/Mirth import all data into
clin.incoming_data_unmatched and have a (GNUmed-side) daemon go over
*that*, produce candidates, and move unambiguous matches into the
clinical tables. Makes sense, yes. It would need to be triggered
somehow, however.
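One pass of such a daemon might look like the sketch below. The function names are hypothetical; the database operations are passed in as callables, which would also keep the matching policy testable in isolation:

```python
# Hedged sketch of the GNUmed-side daemon idea: poll the unmatched
# table, compute candidates, and file unambiguous matches directly.
# All four callables are hypothetical stand-ins for real DB code.

def process_unmatched(fetch_unmatched, find_candidates,
                      move_to_clinical, store_candidates):
    """One daemon pass; returns how many messages were filed
    automatically into the clinical tables."""
    moved = 0
    for message in fetch_unmatched():
        candidates = find_candidates(message)
        if len(candidates) == 1:
            # unambiguous: file into the clinical tables directly
            move_to_clinical(message, candidates[0])
            moved += 1
        else:
            # zero or several candidates: record them and leave the
            # message behind for user verification
            store_candidates(message, candidates)
    return moved
```

The open question from the post remains how to trigger the pass: a cron-style timer, a NOTIFY from the import transaction, or an explicit "process inbox" action in the client would all fit this shape.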
