gnumed-devel

[Gnumed-devel] Server replication


From: Busser, Jim
Subject: [Gnumed-devel] Server replication
Date: Wed, 4 Jul 2012 22:42:16 +0000

was: database replication with bucardo

On 2012-06-27, at 12:54 PM, Slappinjohn wrote:

>> That means you are saying "I will never need to access both
>> work and home at the same time". Good to know because it'll
>> ease up on the requirements.
>> 
> 
> this could change, because my wife is starting to do my writing stuff at
> home, so parallel access could become possible; that's why I'll give bucardo
> a try (I tested rubyrep before -- didn't get it working)

The thread

        'database replication with bucardo'

has also got me thinking about vulnerability to failure once a praxis becomes 
reliant on its EMR, and especially about the potential delays in

1) promoting the backup machine to primary, while the original primary is 
being repaired
2) setting up a "new" backup machine, to take the place of the backup that is 
now acting as primary

all the more because one's server -- on which the praxis may be depending -- may 
be serving more than just GNUmed. Here are two posts from the OSCAR list about 
approaches, one to do with backups and one to do with virtual private servers (VPS):

> Date: November 9, 2011 3:18:54 AM PST
> To: The OSCAR UserGroup list <address@hidden>
> Subject: Re: [Oscarmcmaster-bc-users] Virtual Cloud based Servers - for 
> OSCAR? for MYOSCAR?
> 
> I have a mirrored slave server that is replicating in real time on-site using 
> rsync.
> I also have a back-up offsite storage that receives a copy of the entire 
> database nightly.
> 
> I have Sentinel programs in place to make sure each of these back-ups are 
> operational.
> 
> A script running on the "slave" server executes once an hour and checks 
> the master/slave replication status. It's actually fairly smart: it connects 
> to the master DB (from the slave server) and asks for the master status, which 
> gives it the binary logfile name and the logfile position.
> 
> It then does a "show slave status" on the slave, and compares. If the numbers 
> don't match, it sends an email to me.
> 
> I also receive a daily email from my remote back-up server indicating the 
> size of yesterday's back-up file as well as the size of today's back-up file.
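The hourly check described above boils down to comparing the master's binary-log coordinates against what the slave has read. The sketch below shows that comparison logic only; in the real script the two status dictionaries would come from running "SHOW MASTER STATUS" on the master and "SHOW SLAVE STATUS" on the slave (via the mysql client or a MySQL driver), and the notification would be an email. The function names and the idea of passing statuses in directly are assumptions for illustration, not the poster's actual script.

```python
# Sketch of the hourly master/slave replication check described above.
# The status dictionaries mimic rows returned by MySQL's
# "SHOW MASTER STATUS" (File, Position) and "SHOW SLAVE STATUS"
# (Master_Log_File, Read_Master_Log_Pos); obtaining them from the
# servers is left out so the comparison stands on its own.

def replication_in_sync(master_status, slave_status):
    """True if the slave has read up to the master's current
    binary-log file and position."""
    return (master_status["File"] == slave_status["Master_Log_File"]
            and master_status["Position"] == slave_status["Read_Master_Log_Pos"])

def check_and_report(master_status, slave_status, notify):
    """Compare coordinates and call notify() with a message on mismatch,
    mirroring the 'send me an email' behaviour described in the post."""
    if not replication_in_sync(master_status, slave_status):
        notify("replication lag: master at %s:%s, slave has read %s:%s" % (
            master_status["File"], master_status["Position"],
            slave_status["Master_Log_File"],
            slave_status["Read_Master_Log_Pos"]))
        return False
    return True
```

In a real deployment, notify() would send mail (e.g. via smtplib) and the script would be run hourly from cron on the slave.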


> Date: November 14, 2011 12:35:11 PM PST
> 
> To revisit:
> 
> As you've no doubt realized, hardware can be a pain. It's scary trying to 
> find old RAID controllers, tape drives, and SCSI drives to replace failed 
> server hardware. In many cases "enterprise" means, "obscure and in 5 years I 
> will have a hard time finding replacement hardware for cheap because everyone 
> else is looking for the same obscure hardware to replace THEIR failed 
> equipment!".
> 
> Therefore I emphasize using commodity hardware such as SATA drives, and 
> virtualization so that the underlying hardware essentially doesn't matter, 
> and you treat it as expendable.
> 
> With off-site VPS hosting, the one thing you'll need to come to grips with is 
> what happens when you lose your Internet connection, or when the VPS 
> provider screws up and breaks your server while doing an upgrade. There's 
> nothing you can do but let the 3rd-party provider fix it. Do you trust that they 
> have the resources to do that quickly? Are they cutting corners somewhere? 
> While it's a nice idea, I reckon that you need servers on-site anyway for 
> other network services like shared drives, so you might as well consolidate and 
> make the entire on-site infrastructure more robust.
> 
> I'd say keep it on-site, have 2 servers of new hardware (cheap rack mounted 
> units are fine) running RAID1 and also in mirror with each other using 
> Linux's Distributed Replicated Block Device (DRBD) and virtualization, and 
> use the VPS services for off-site backups. Have another storage unit for 
> complete point-in-time snapshots of your virtual machines. I say to use DRBD 
> because the point of disaster avoidance is to minimize points of failure. A 
> lot of vendors are pushing SAN-type storage as a way to ensure business 
> continuity, since when paired with solutions like VMware you can get hot 
> failover between servers. However, you've then got a single point of failure: 
> the SAN itself.
> Complement this with a good support package and a monitoring tool like Zabbix 
> to watch over things and alert you when something is amiss.
> 
> Regards,
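The two-server DRBD mirror suggested above comes down to a resource definition along the following lines. The resource name, device paths, host names, and addresses are all hypothetical placeholders; this is a minimal sketch of a DRBD resource file, not a tested configuration.

```
# /etc/drbd.d/vmstore.res -- hypothetical resource mirroring a block
# device between the two on-site servers; the virtual machines would
# live on top of /dev/drbd0.
resource vmstore {
  protocol C;               # synchronous replication: a write completes
                            # only once both nodes have it on disk
  device    /dev/drbd0;
  disk      /dev/sda3;      # backing partition on each host (assumed)
  meta-disk internal;
  on server1 {
    address 192.168.1.10:7789;
  }
  on server2 {
    address 192.168.1.11:7789;
  }
}
```

Protocol C trades some write latency for the guarantee that a failed node never holds data the survivor lacks, which matches the disaster-avoidance argument above.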



