gluster-devel

Re: [Gluster-devel] AFR problem with 2.0rc4


From: nicolas prochazka
Subject: Re: [Gluster-devel] AFR problem with 2.0rc4
Date: Thu, 19 Mar 2009 09:58:41 +0100

I'm trying the latest gluster from git.
The bug is fixed, but there still seems to be a lot of strange behaviour in AFR mode.
If I take down one of the two servers, clients either do not respond to an ls, or
respond with only some of the files, sometimes just one.
I have tried with the lock-server count set to 2, 1 and 0; the results are the same.
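For context, the lock-server counts I varied are set on the cluster/replicate volume. Assuming the 2.0-era option names (which I have not verified against this exact build), the fragment would look roughly like:

```text
volume last
  type cluster/replicate
  subvolumes brick_10.98.98.1 brick_10.98.98.2
  # assumed option names; a count of 0 would disable that class of locking
  option data-lock-server-count 1
  option metadata-lock-server-count 1
  option entry-lock-server-count 1
end-volume
```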

Regards,
Nicolas Prochazka
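The symptom reported further down in this thread (correct size from ls -l but no disk usage from df) is what a hole-only sparse file looks like. A small sketch to detect that state on a backend export (helper names are illustrative, not part of gluster):

```python
import os
import tempfile

def allocated_bytes(path: str) -> int:
    # st_blocks is counted in 512-byte units on POSIX systems
    return os.stat(path).st_blocks * 512

def looks_hole_only(path: str) -> bool:
    # A non-empty file occupying (almost) no disk blocks is suspicious:
    # self-heal may have recreated the size but never written the data.
    st = os.stat(path)
    return st.st_size > 0 and st.st_blocks * 512 < st.st_size

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        sparse = os.path.join(d, "sparse.img")
        full = os.path.join(d, "full.img")
        with open(sparse, "wb") as f:
            f.truncate(1 << 20)           # 1 MiB of holes, no data written
        with open(full, "wb") as f:
            f.write(os.urandom(1 << 20))  # 1 MiB of real data
        print(looks_hole_only(sparse), looks_hole_only(full))
```

Running this against files under the storage/posix export directory would distinguish a correctly healed file from one that only has its size restored.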

On Wed, Mar 18, 2009 at 9:33 AM, Amar Tumballi <address@hidden> wrote:
> Hi Nicolas,
>  Sure, we are in the process of internal testing. It should be out as a
> release soon. Meanwhile, you can pull from git and test it out.
>
> Regards,
>
> On Wed, Mar 18, 2009 at 1:30 AM, nicolas prochazka
> <address@hidden> wrote:
>>
>> Hello,
>> I see the correction of the AFR self-heal bug in the git tree.
>> Can we test this version? Is it stable enough compared to the rc releases?
>> nicolas
>>
>> On Tue, Mar 17, 2009 at 9:39 PM, nicolas prochazka
>> <address@hidden> wrote:
>> > My test is:
>> > Set up two servers in AFR mode.
>> > Copy files to the mount point (/mnt/vdisk): OK, replication to both
>> > servers works.
>> > Then delete (rm) all files from the backend storage on server 1
>> > (/mnt/disks/export) and wait for resynchronisation.
>> > With rc2 and rc4: the files come back with the correct size (ls -l) but
>> > contain nothing (df shows no disk usage) and are corrupt.
>> > With rc1: everything is fine, the server resynchronises perfectly. I
>> > think that is the right behaviour ;)
>> >
>> > nicolas
>> >
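The test quoted above corresponds roughly to the following sequence. This is a sketch only, not a runnable script: the mount point, backend path and volfile locations are taken from the configs later in the thread, and the volfile path is assumed.

```shell
# On the client: mount the AFR volume and copy data in
glusterfs -f /etc/glusterfs/client.vol /mnt/vdisk
cp bigfile /mnt/vdisk/

# On server 1: wipe the backend export behind gluster's back
rm -rf /mnt/disks/export/*

# Back on the client: trigger AFR self-heal by walking the tree
ls -lR /mnt/vdisk > /dev/null

# On server 1: compare apparent size against real disk usage
ls -l /mnt/disks/export/bigfile
du -h /mnt/disks/export/bigfile   # rc2/rc4: near zero, file is corrupt
```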
>> > On Tue, Mar 17, 2009 at 6:49 PM, Amar Tumballi <address@hidden> wrote:
>> >> Hi Nicolas,
>> >>  When you say you 'add' a server here, do you mean adding another
>> >> server to the replicate subvolume (i.e. going from 2 to 3)? Or was one
>> >> of the 2 servers down while you were copying data, and you then bring
>> >> it back up and trigger the AFR self-heal?
>> >>
>> >> Regards,
>> >> Amar
>> >>
>> >> On Tue, Mar 17, 2009 at 7:22 AM, nicolas prochazka
>> >> <address@hidden> wrote:
>> >>>
>> >>> Yes, I tried without any translators, but the bug persists.
>> >>>
>> >>> I cannot see anything interesting in the logs; the file size always
>> >>> looks correct when synchronisation begins.
>> >>> As I wrote before, if I cp files during normal operation (both
>> >>> servers up) everything is fine; the problem appears only when I try
>> >>> to resynchronise (rm everything in one server's storage/posix
>> >>> directory): gluster recreates the files, but they are empty or
>> >>> contain bad data.
>> >>>
>> >>> I also noticed that with RC1, if I try an ls on the mount point
>> >>> during resynchronisation, the ls blocks until synchronisation
>> >>> finishes; with RC2, the ls does not block.
>> >>>
>> >>> Regards,
>> >>> Nicolas
>> >>>
>> >>>
>> >>>
>> >>>
>> >>> On Tue, Mar 17, 2009 at 2:50 PM, Gordan Bobic <address@hidden>
>> >>> wrote:
>> >>> > Have you tried the later versions (rc2/rc4) without the
>> >>> > performance translators? Does the problem persist without them?
>> >>> > Anything interesting in the logs?
>> >>> >
>> >>> > On Tue, 17 Mar 2009 14:46:41 +0100, nicolas prochazka
>> >>> > <address@hidden> wrote:
>> >>> >> Hello again,
>> >>> >> so this bug does not occur with RC1.
>> >>> >>
>> >>> >> RC2 and RC4 contain the bug described below, but RC1 does not. Any idea?
>> >>> >> Nicolas
>> >>> >>
>> >>> >> On Tue, Mar 17, 2009 at 12:55 PM, nicolas prochazka
>> >>> >> <address@hidden> wrote:
>> >>> >>> I have just tried rc2: same bug as in rc4.
>> >>> >>> Regards,
>> >>> >>> Nicolas
>> >>> >>>
>> >>> >>> On Tue, Mar 17, 2009 at 12:06 PM, Gordan Bobic <address@hidden>
>> >>> >>> wrote:
>> >>> >>>> Can you check if it works correctly with 2.0rc2 and/or 2.0rc1?
>> >>> >>>>
>> >>> >>>> On Tue, 17 Mar 2009 12:04:33 +0100, nicolas prochazka
>> >>> >>>> <address@hidden> wrote:
>> >>> >>>>> Oops,
>> >>> >>>>> the same problem in fact occurs with a simple 8-byte text file:
>> >>> >>>>> the file seems to be corrupt.
>> >>> >>>>>
>> >>> >>>>> Regards,
>> >>> >>>>> Nicolas Prochazka
>> >>> >>>>>
>> >>> >>>>> On Tue, Mar 17, 2009 at 11:20 AM, Gordan Bobic
>> >>> >>>>> <address@hidden>
>> >>> >>>>> wrote:
>> >>> >>>>>> Are you sure this is rc4-specific? I've seen assorted
>> >>> >>>>>> weirdness when adding and removing servers in all versions up
>> >>> >>>>>> to and including rc2 (rc4 seems to lock up when starting udev,
>> >>> >>>>>> so I'm not using it).
>> >>> >>>>>>
>> >>> >>>>>> On Tue, 17 Mar 2009 11:15:30 +0100, nicolas prochazka
>> >>> >>>>>> <address@hidden> wrote:
>> >>> >>>>>>> Hello guys,
>> >>> >>>>>>>
>> >>> >>>>>>> strange problem:
>> >>> >>>>>>> with rc4, AFR synchronisation does not seem to work:
>> >>> >>>>>>> - If I copy a file onto the gluster mount, everything is OK
>> >>> >>>>>>>   on all servers.
>> >>> >>>>>>> - If I add a new server to gluster, this server creates my
>> >>> >>>>>>>   files (10 GB in size); each appears on XFS as a 10 GB file,
>> >>> >>>>>>>   but it does not contain the original data, just a few bytes.
>> >>> >>>>>>> Gluster then does not resynchronise, perhaps because the
>> >>> >>>>>>> sizes are the same.
>> >>> >>>>>>>
>> >>> >>>>>>> regards,
>> >>> >>>>>>> NP
>> >>> >>>>>>>
>> >>> >>>>>>>
>> >>> >>>>>>> volume brickless
>> >>> >>>>>>> type storage/posix
>> >>> >>>>>>> option directory /mnt/disks/export
>> >>> >>>>>>> end-volume
>> >>> >>>>>>>
>> >>> >>>>>>> volume brickthread
>> >>> >>>>>>> type features/posix-locks
>> >>> >>>>>>> option mandatory-locks on   # enables mandatory locking on all files
>> >>> >>>>>>> subvolumes brickless
>> >>> >>>>>>> end-volume
>> >>> >>>>>>>
>> >>> >>>>>>> volume brick
>> >>> >>>>>>> type performance/io-threads
>> >>> >>>>>>> option thread-count 4
>> >>> >>>>>>> subvolumes brickthread
>> >>> >>>>>>> end-volume
>> >>> >>>>>>>
>> >>> >>>>>>>
>> >>> >>>>>>> volume server
>> >>> >>>>>>> type protocol/server
>> >>> >>>>>>> subvolumes brick
>> >>> >>>>>>> option transport-type tcp
>> >>> >>>>>>> option auth.addr.brick.allow 10.98.98.*
>> >>> >>>>>>> end-volume
>> >>> >>>>>>>
>> >>> >>>>>>>
>> >>> >>>>>>>
>> >>> >>>>>>> -------------------------------------------
>> >>> >>>>>>>
>> >>> >>>>>>>
>> >>> >>>>>>>
>> >>> >>>>>>> volume brick_10.98.98.1
>> >>> >>>>>>> type protocol/client
>> >>> >>>>>>> option transport-type tcp/client
>> >>> >>>>>>> option transport-timeout 120
>> >>> >>>>>>> option remote-host 10.98.98.1
>> >>> >>>>>>> option remote-subvolume brick
>> >>> >>>>>>> end-volume
>> >>> >>>>>>>
>> >>> >>>>>>>
>> >>> >>>>>>> volume brick_10.98.98.2
>> >>> >>>>>>> type protocol/client
>> >>> >>>>>>> option transport-type tcp/client
>> >>> >>>>>>> option transport-timeout 120
>> >>> >>>>>>> option remote-host 10.98.98.2
>> >>> >>>>>>> option remote-subvolume brick
>> >>> >>>>>>> end-volume
>> >>> >>>>>>>
>> >>> >>>>>>>
>> >>> >>>>>>> volume last
>> >>> >>>>>>> type cluster/replicate
>> >>> >>>>>>> subvolumes brick_10.98.98.1 brick_10.98.98.2
>> >>> >>>>>>> option read-subvolume brick_10.98.98.1
>> >>> >>>>>>> option favorite-child brick_10.98.98.1
>> >>> >>>>>>> end-volume
>> >>> >>>>>>> volume iothreads
>> >>> >>>>>>> type performance/io-threads
>> >>> >>>>>>> option thread-count 4
>> >>> >>>>>>> subvolumes last
>> >>> >>>>>>> end-volume
>> >>> >>>>>>>
>> >>> >>>>>>> volume io-cache
>> >>> >>>>>>> type performance/io-cache
>> >>> >>>>>>> option cache-size 2048MB             # default is 32MB
>> >>> >>>>>>> option page-size  128KB             #128KB is default option
>> >>> >>>>>>> option cache-timeout 2  # default is 1
>> >>> >>>>>>> subvolumes iothreads
>> >>> >>>>>>> end-volume
>> >>> >>>>>>>
>> >>> >>>>>>> volume writebehind
>> >>> >>>>>>> type performance/write-behind
>> >>> >>>>>>> option aggregate-size 128KB # default is 0bytes
>> >>> >>>>>>> option window-size 512KB
>> >>> >>>>>>> option flush-behind off      # default is 'off'
>> >>> >>>>>>> subvolumes io-cache
>> >>> >>>>>>> end-volume
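For comparison with Gordan's suggestion earlier in the thread, the same client stripped of the performance translators (io-threads, io-cache, write-behind) reduces to just the two protocol/client volumes plus replicate. A sketch, reusing the names from the config above:

```text
volume brick_10.98.98.1
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.98.98.1
  option remote-subvolume brick
end-volume

volume brick_10.98.98.2
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.98.98.2
  option remote-subvolume brick
end-volume

volume last
  type cluster/replicate
  subvolumes brick_10.98.98.1 brick_10.98.98.2
end-volume
```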
>> >>> >>>>>>>
>> >>> >>>>>>>
>> >>> >>>>>>> _______________________________________________
>> >>> >>>>>>> Gluster-devel mailing list
>> >>> >>>>>>> address@hidden
>> >>> >>>>>>> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>> >>> >>>>>>
>> >>> >>>>>>
>> >>> >>>>
>> >>> >>>>
>> >>> >>>
>> >>> >
>> >>> >
>> >>>
>> >>>
>> >>
>> >>
>> >>
>> >> --
>> >> Amar Tumballi
>> >>
>> >>
>> >
>>
>
>
>
> --
> Amar Tumballi
>
>



