Re: [Gluster-devel] Rsync failure problem


From: Amar S. Tumballi
Subject: Re: [Gluster-devel] Rsync failure problem
Date: Sat, 12 Jul 2008 12:01:55 -0700

Any logs from GlusterFS? That would help a lot.
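
If logging isn't already enabled, the quickest way to get something useful is
to remount the client with an explicit log file and a more verbose log level,
then reproduce the failure. A rough example follows -- the spec and log paths
below are placeholders, and I'm going from memory on the 1.3.x option names,
so please check glusterfs --help:

  # example only: adjust spec file, log path and mount point to your setup
  glusterfs -f /etc/glusterfs/glusterfs-client.vol \
            -l /var/log/glusterfs/client.log -L DEBUG /mydir

The same -l / -L options can be passed to glusterfsd on the servers. The log
entries from around the time rsync dies are the interesting part.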

2008/7/12 skimber <address@hidden>:

>
> Hi Everyone,
>
> I have just set up glusterfs for the first time using two server machines
> and two clients.
>
> I'm trying to rsync a large amount of data (approx. 1.2 million files
> totalling 14 GB) from another server using the following command (real names
> changed!):
>
> rsync -v --progress --stats --links --safe-links --recursive
> www.domain.com::mydir /mydir
>
> Rsync connects, gets the file list and then as soon as it starts trying to
> copy files it dies, like this:
>
> receiving file list ...
> 1195208 files to consider
> .bash_history
> rsync: connection unexpectedly closed (24714877 bytes received so far)
> [generator]
> rsync error: error in rsync protocol data stream (code 12) at io.c(453)
> [generator=2.6.9]
>
> As soon as that happens, the following lines are added, twice, to syslog:
>
> Jul 12 13:23:03 server01 kernel: hdb: dma_intr: status=0x51 { DriveReady
> SeekComplete Error }
> Jul 12 13:23:03 server01 kernel: hdb: dma_intr: error=0x40 {
> UncorrectableError }, LBAsect=345341, sector=345335
> Jul 12 13:23:03 server01 kernel: ide: failed opcode was: unknown
> Jul 12 13:23:03 server01 kernel: end_request: I/O error, dev hdb, sector
> 345335
>
> We were previously looking at a DRBD-based solution, with which rsync worked
> fine.  Nothing has changed at the remote server; the only difference is that
> we are now using Gluster instead of DRBD.
>
> We are using glusterfs-1.3.9 and fuse-2.7.3glfs10 on Debian Etch
>
> The server config looks like this:
>
> volume posix
>  type storage/posix
>  option directory /data/export
> end-volume
>
> volume plocks
>  type features/posix-locks
>  subvolumes posix
> end-volume
>
> volume brick
>  type performance/io-threads
>  option thread-count 4
>  subvolumes plocks
> end-volume
>
> volume brick-ns
>  type storage/posix
>  option directory /data/export-ns
> end-volume
>
> volume server
>  type protocol/server
>  option transport-type tcp/server
>  option auth.ip.brick.allow *
>  option auth.ip.brick-ns.allow *
>  subvolumes brick brick-ns
> end-volume
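>
> Each data server loads this spec with glusterfsd, roughly like this (the
> spec file path here is just an example):
>
>  glusterfsd -f /etc/glusterfs/glusterfs-server.vol   # spec path is an example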
>
>
>
> And the client config looks like this:
>
>
> volume brick1
>  type protocol/client
>  option transport-type tcp/client     # for TCP/IP transport
>  option remote-host data01      # hostname or IP of the remote brick
>  option remote-subvolume brick        # name of the remote volume
> end-volume
>
> volume brick2
>  type protocol/client
>  option transport-type tcp/client
>  option remote-host data02
>  option remote-subvolume brick
> end-volume
>
> volume brick-ns1
>  type protocol/client
>  option transport-type tcp/client
>  option remote-host data01
>  option remote-subvolume brick-ns  # Note the different remote volume name.
> end-volume
>
> volume brick-ns2
>  type protocol/client
>  option transport-type tcp/client
>  option remote-host data02
>  option remote-subvolume brick-ns  # Note the different remote volume name.
> end-volume
>
> volume afr1
>  type cluster/afr
>  subvolumes brick1 brick2
> end-volume
>
> volume afr-ns
>  type cluster/afr
>  subvolumes brick-ns1 brick-ns2
> end-volume
>
> volume unify
>  type cluster/unify
>  option namespace afr-ns
>  option scheduler rr
>  subvolumes afr1
> end-volume
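>
> The clients mount this spec in the usual way, roughly like this (again the
> spec file path is just an example):
>
>  glusterfs -f /etc/glusterfs/glusterfs-client.vol /mydir   # spec path is an example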
>
>
> Any advice or suggestions would be greatly appreciated!
>
> Many thanks
>
> Simon
> --
> View this message in context:
> http://www.nabble.com/Rsync-failure-problem-tp18420195p18420195.html
> Sent from the gluster-devel mailing list archive at Nabble.com.
>
>
>
> _______________________________________________
> Gluster-devel mailing list
> address@hidden
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>



-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!

