Re: [Gluster-devel] self heal option


From: Michael Fincham
Subject: Re: [Gluster-devel] self heal option
Date: Tue, 11 Sep 2007 12:27:48 +1200

Hi August, list,

I believe self heal is only invoked once a file is opened after the AFR
has been split.

E.g., take a copy of the file and it will also self heal.
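For example, one crude way to sweep a whole volume is to read a byte of
every file from a client mount, which opens each file through AFR. This is
only a sketch of the idea, not a tested recipe, and /mnt/glusterfs is just
a placeholder for wherever your client spec is mounted:

    # open (and, per the above, self-heal) every regular file on the mounted volume
    find /mnt/glusterfs -type f -exec head -c 1 {} \; > /dev/null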

-Michael

On Mon, 2007-09-10 at 20:04 -0400, August R. Wohlt wrote:
> Hi folks,
> 
> I notice that a few people have posted spec files with self-heal in
> cluster/afr as well as in cluster/unify. I can't seem to demonstrate
> this working, however. Is there a way to get healing functionality
> with straight afr?
> 
> Take a simple 2 machine afr mirror for example (spec file below). I
> have a brick on machine A doing afr to the brick on machine B and vice
> versa. If one of the bricks goes down and I create a file on the
> remaining one, is there any way to have healing done when the other
> brick comes back up?
> 
> In my testing, I don't see any evidence of self-heal in afr even
> though several folks have it in their spec files. When I bring the
> other machine back online, files that were created while it was down
> are never created on the mirror brick.
> 
> Can someone with a more intimate knowledge of the code comment on this?
> Thanks...
> 
> Server(s) :
> 
> volume brick-ds
>     type storage/posix
>     option directory /.brick-ds
> end-volume
> 
> volume server
>     type protocol/server
>     option transport-type tcp/server
>     option bind-address 192.168.16.1
>     subvolumes brick-ds
>     option auth.ip.brick-ds.allow 192.168.16.*
> end-volume
> 
> Client(s):
> 
> volume brick-ds-local
>     type protocol/client
>     option transport-timeout 4
>     option transport-type tcp/client
>     option remote-host 192.168.16.1
>     option remote-subvolume brick-ds
> end-volume
> 
> volume brick-ds-remote
>     type protocol/client
>     option transport-timeout 4
>     option transport-type tcp/client
>     option remote-host 192.168.16.128
>     option remote-subvolume brick-ds
> end-volume
> 
> volume brick-ds-afr
>     type cluster/afr
>     subvolumes brick-ds-local brick-ds-remote
>     option self-heal yes
>     option replicate *:2
> end-volume
> 
> 
> _______________________________________________
> Gluster-devel mailing list
> address@hidden
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
-- 
-Michael Fincham <address@hidden>
Unleash Technology Solutions
