Re: [Gnumed-devel] Approaches to maintain clinical data uptime


From: Karsten Hilbert
Subject: Re: [Gnumed-devel] Approaches to maintain clinical data uptime
Date: Wed, 3 May 2006 14:59:49 +0200
User-agent: Mutt/1.5.11+cvs20060403

On Wed, May 03, 2006 at 01:04:08AM -0700, Jim Busser wrote:

> Speaking of write-ahead logs, would these be standard in GNUmed -- as  
> currently deployed under Postgres --
Yes. GNUmed cannot (and should not) do anything about PG using WAL.

> and would these WALs be easily  
> (or only with great difficulty) usable to recover data that had not  
> made it into the secondary (backup) server, between the time of any  
> last primary database backup (dump) and the time of a primary server  
> crash?
Easily, no. With great difficulty, no either. There's PITR
for that (point-in-time recovery, i.e. WAL log shipping). Not
sure off the top of my head where the advantage over direct
replication (Slony) lies.
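
Roughly, PITR means taking a base backup of the data
directory and then continuously archiving the WAL segments PG
writes; after a crash those segments are replayed onto the
base backup, up to the last archived segment or any earlier
point in time. A minimal sketch of the postgresql.conf side
of it - the archive path is just an example:

archive_command = 'cp %p /var/backups/wal_archive/%f'

PG then runs that command once per finished WAL segment (%p
is the path of the segment file, %f its name). The base
backup itself is bracketed by "select pg_start_backup('some
label');" and "select pg_stop_backup();" on the server.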

> - is it recommended or required that the database be fully logged off
Neither. It simply does not matter.

>, so that it is in an unambiguous or "clean" state (no unprocessed  
> transactions) at the time of the dump?
If PG allowed a dump to contain "unprocessed transactions"
we would not consider it for driving our backend. pg_dump
runs inside a single transaction and therefore sees one
consistent snapshot of the database.

Now, as for "clean" - PG can only enforce *physical*
transaction boundaries, i.e. whatever the application puts
inside a transaction is either written to disc safely or not
written at all. There is nothing in between. What PG cannot
enforce, however, is that the *content* of a transaction is
sane in a business sense. Gross example:

Application writes

begin;
insert into diagnosis values ('total crap, and wrong as well');
commit;

Now PG will take the utmost care to dutifully write total
crap to disc. But that's what the application asked for.

IOW we as programmers need to make sure that the content of
our transactions always leaves behind a meaningful medical
record.
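
To give the flip side of the gross example above, here is a
sketch of what "meaningful" grouping looks like (table and
column names are made up for illustration, GNUmed's real
schema differs):

begin;
-- the clinical items that belong together go into ONE transaction
insert into encounter (pk, fk_patient) values (17, 12);
insert into diagnosis (fk_encounter, narrative) values (17, 'acute bronchitis');
commit;

Either both rows become visible to other sessions at commit
time or, if anything in between fails and a "rollback;" is
issued, neither does. PG guarantees that much - deciding what
belongs together between "begin;" and "commit;" is up to us.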

> - can Syan or Karsten, if the work involved is limited, attempt a  
> load test of writing the dump file, so we might know how long this  
> would require and what its size might be...
This is entirely dependent on the database content. The dump
file may indeed grow huge (several GB, possibly) but it can
be fed into compression right away or streamed onto tape or
whatever.
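
For a rough size estimate the dump can be piped straight into
gzip (database name and target path are only examples):

pg_dump gnumed | gzip > /var/backups/gnumed-$(date +%Y%m%d).sql.gz

pg_dump writes plain SQL to stdout, so nothing ever has to
sit on disc uncompressed, and the same pipeline can stream
onto tape instead.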

> this is pertinent to  
> office procedures and downtime
No, it's not. That's what PG has MVCC for - a dump runs
alongside normal work without blocking it - and that's why we
*chose* PG in the first place.

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346



