Re: [Gluster-devel] Re; Bug #21918

From: Gareth Bult
Subject: Re: [Gluster-devel] Re; Bug #21918
Date: Thu, 31 Jan 2008 13:43:47 +0000 (GMT)

Well, I must admit I have the same problem; my initial testing on this was 
flawed and I jumped the gun a little. 

xen block-detach and xen block-attach "should" do this, but there is a problem, 
which I'm "hoping" is tied to the "file:" driver. 
At the moment I'm working on a 2.6.21 kernel which should support AIO on 
gluster (?!), and my next target is to see whether this driver performs better 
than "file:" with regards to xen detach and attach. 

If this does work, it should be possible to script drive removal / re-add from 
the Dom0 after you restart glusterfsd .... 
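If detach/attach does behave, the Dom0 side could be scripted roughly as follows. This is only a sketch: the domain name, device name, and image path are placeholder assumptions, and the commands are echoed via a dry-run wrapper rather than executed.

```shell
#!/bin/sh
# Hypothetical Dom0 script: detach the guest's backing device, restart
# glusterfsd, then re-attach. DOMU, DEV and IMG are placeholders.
DOMU=guest1
DEV=xvdb
IMG=file:/mnt/glusterfs/guest1-data.img

run() { echo "$@"; }    # dry-run wrapper: echoes commands instead of executing

run xm block-detach "$DOMU" "$DEV"           # remove the block device
run /etc/init.d/glusterfsd restart           # restart the gluster server
run xm block-attach "$DOMU" "$IMG" "$DEV" w  # re-add it read-write
```

Dropping the `echo` in the wrapper would run the commands for real.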


----- Original Message ----- 
From: "Jonathan Galentine" <address@hidden> 
To: "Gareth Bult" <address@hidden>, address@hidden 
Sent: 31 January 2008 13:22:01 o'clock (GMT) Europe/London 
Subject: Re: [Gluster-devel] Re; Bug #21918 

I tried this; how did you handle the case when a node fails? You can't remount 
the glusterfs client partition when the node comes back up, because it is 
marked in use/busy, and umount -f does not seem to work (it has a file 'open', 
but you receive a transport error when trying to access it or the mount). Does 
the gluster client have a remount option? 
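One sequence sometimes suggested for a stale FUSE mount is a lazy unmount followed by a fresh client mount. A dry-run sketch only: the mount point and volfile path are placeholder assumptions, and the commands are echoed rather than executed.

```shell
#!/bin/sh
# Hypothetical recovery of a stale glusterfs client mount after the
# node returns. MNT and VOLFILE are placeholders for this sketch.
MNT=/mnt/glusterfs
VOLFILE=/etc/glusterfs/glusterfs-client.vol

run() { echo "$@"; }    # dry-run wrapper: echoes commands instead of executing

run fuser -km "$MNT"                 # kill processes holding the busy mount
run umount -l "$MNT"                 # lazy unmount detaches even a busy mount
run glusterfs -f "$VOLFILE" "$MNT"   # mount the client again
```

Whether this counts as a clean recovery is exactly the open question in the thread; it is a workaround, not a remount option.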

On Jan 29, 2008 4:27 AM, Gareth Bult < address@hidden > wrote: 


Many thanks .. fyi; I found a way around the self-heal issue for XEN users, 
this also leads to a huge performance boost. 

I'm running two gluster filesystems (no self-heal on either), then running 
software raid "inside" the DomU across file images, one on each system. 
Read throughput in the DomU is 95%+ of the speed of the local disk. 

(and self-heal is performed by software raid rather than gluster) 
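The layout above could be assembled inside the DomU along these lines. A sketch only: the two xvd device names are assumptions for the two file images (one served from each gluster filesystem), and the commands are echoed via a dry-run wrapper rather than executed.

```shell
#!/bin/sh
# Hypothetical RAID-1 mirror inside the DomU across two file images,
# one per gluster filesystem; /dev/xvdb and /dev/xvdc are placeholders.
run() { echo "$@"; }    # dry-run wrapper: echoes commands instead of executing

run mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/xvdb /dev/xvdc
run mkfs.ext3 /dev/md0           # filesystem on top of the mirror
run mount /dev/md0 /data         # self-heal is now md's resync, not gluster's
```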


----- Original Message ----- 
From: "Anand Avati" < address@hidden > 
To: "Gareth Bult" < address@hidden > 
Cc: "gluster-devel Gluster Devel List" < address@hidden > 
Sent: 29 January 2008 03:02:00 o'clock (GMT) Europe/London 
Subject: Re: [Gluster-devel] Re; Bug #21918 

This will be addressed in the next AFR commit; self-heal is being worked on 
in AFR. We are even working on self-healing files with holes, but that might be a 
week beyond. 


2008/1/28, Gareth Bult < address@hidden >: 


There's one issue that's stopping me going production atm, which is documented 
in #21918. 

Any news on this .. it does seem fairly "critical" to Gluster being usable ... 

Gluster-devel mailing list 

If I traveled to the end of the rainbow 
As Dame Fortune did intend, 
Murphy would be there to tell me 
The pot's at the other end. 
