
Re: [Gluster-devel] Selfheal is not working? Once more


From: Łukasz Osipiuk
Subject: Re: [Gluster-devel] Selfheal is not working? Once more
Date: Thu, 31 Jul 2008 08:33:31 +0200

2008/7/31 Raghavendra G <address@hidden>:
> Hi,
>
> Can you do a _find . | xargs touch_ and check whether brick A is
> self-healed?

Strange thing. After a night, all files appeared on brick A,
but empty, with a creation date of Jan 1 1970, and without any extended
attributes. Maybe the slocate daemon touched them?


After another shutdown/delete/startup/"find . | xargs touch" cycle
it worked. Thanks a lot :)


I realized that previously I was doing the "access on client" phase
too soon, before the client had established a TCP
connection with the new brick A daemon. Now the brick self-healed :)
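
One way to make sure the reconnect has actually happened before touching
anything (just a sketch; 6996 is the default glusterfsd listen port, and
BRICK_A_IP is a placeholder for brick A's address):

# on the client: wait for an ESTABLISHED connection to brick A
until netstat -tn | grep "$BRICK_A_IP:6996" | grep -q ESTABLISHED; do
    sleep 1
done
find /mnt/glusterfs | xargs touch   # then trigger self-heal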

There is still a minor (?) issue with directories. They regained their
extended attributes, but the creation date displayed by ls after
self-heal is Jan 1 1970 (both on brick A and on the client).
Is this a known bug/feature?
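
Something like this shows the symptom on the brick (paths are examples;
reading trusted.* attributes needs root):

# mtime of a healed directory shows 1970-01-01
stat -c '%y %n' /exports/brick-a/somedir
# but the trusted.afr.* extended attributes are back
getfattr -d -m trusted -e hex /exports/brick-a/somedir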

Regards, Łukasz



> regards,
>
> On Thu, Jul 31, 2008 at 4:07 AM, Łukasz Osipiuk <address@hidden> wrote:
>>
>> Thanks for answers :)
>>
>> On Wed, Jul 30, 2008 at 8:52 PM, Martin Fick <address@hidden> wrote:
>> > --- On Wed, 7/30/08, Łukasz Osipiuk <address@hidden> wrote:
>> >
>>
>> [cut]
>>
>> >> The more extreme example is: one of the data bricks explodes and
>> >> you replace it with a new one, configured like the one that died,
>> >> but with an empty HD. This is the same as the above
>> >> experiment, but all data is gone, not just one file.
>> >
>> > AFR should actually handle this case fine.  When you install
>> > a new brick and it is empty, there will be no metadata for
>> > any files or directories on it, so it will self-heal (lazily).
>> > The problem you described above occurs because you have
>> > metadata saying that your files (the directory, actually) are
>> > up to date, but the directory is not, since it was modified
>> > manually under the hood.  AFR cannot detect this (yet); it
>> > trusts its metadata.
>>
>> Well, either I am doing something terribly wrong, or it does not
>> handle this case fine.
>> I have the following configuration:
>> 6 bricks: A, B, C, D, E, F
>> On the client I stack the translators like this:
>> IO-CACHE(
>>   IO-THREADS(
>>     WRITE-BEHIND(
>>       READ-AHEAD(
>>         UNIFY(
>>           DATA(AFR(A,B), AFR(C,D)), NS(AFR(E,F))
>>         )
>>       )
>>     )
>>   )
>> )
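>>
>> In spec-file terms that corresponds to something like this (a partial
>> sketch: only one data pair and the namespace are shown, and the volume
>> names and the rr scheduler are my guesses):
>>
>> volume afr-ab
>>   type cluster/afr
>>   subvolumes brick-a brick-b    # protocol/client volumes for bricks A and B
>> end-volume
>>
>> volume afr-ns
>>   type cluster/afr
>>   subvolumes brick-e brick-f
>> end-volume
>>
>> volume unify0
>>   type cluster/unify
>>   option namespace afr-ns
>>   option scheduler rr           # unify requires some scheduler
>>   subvolumes afr-ab afr-cd
>> end-volume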
>>
>> I do:
>> 1. mount glusterfs on client
>> 2. on client create few files/directories on mounted glusterfs
>> 3. shutdown brick A
>> 4. delete and recreate brick A local directory
>> 5. startup brick A
>> 6. on client access all files in mounted glusterfs directory.
>>
>> After this procedure (sketched in shell below) no files/directories
>> appear in brick A's local directory. Should they, or am I missing
>> something?
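>>
>> In shell terms the test is roughly this (a sketch, assuming brick A
>> exports /exports/brick-a, is started from /etc/glusterfs/brick-a.vol,
>> and the client mount is /mnt/glusterfs; all paths are made up):
>>
>> # on the brick A machine:
>> killall glusterfsd                         # 3. shut down brick A
>> rm -rf /exports/brick-a                    # 4. delete and recreate
>> mkdir -p /exports/brick-a                  #    its local directory
>> glusterfsd -f /etc/glusterfs/brick-a.vol   # 5. start brick A again
>>
>> # on the client:
>> ls -lR /mnt/glusterfs > /dev/null          # 6. access all files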
>>
>>
>> I think the file checksumming you described is overkill for my needs.
>> I think I will know when one of my HD drives breaks down, and I will
>> replace it, but I need to work around the data-recreation problem
>> described above.
>>
>>
>> --
>> Łukasz Osipiuk
>> mailto: address@hidden
>>
>> _______________________________________________
>> Gluster-devel mailing list
>> address@hidden
>> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>>
>
>
>
> --
> Raghavendra G
>
> A centipede was happy quite, until a toad in fun,
> Said, "Pray, which leg comes after which?",
> This raised his doubts to such a pitch,
> He fell flat into the ditch,
> Not knowing how to run.
> -Anonymous
>



-- 
Łukasz Osipiuk
mailto: address@hidden
