Re: [Gluster-devel] GlusterFS 1.3.0-pre2.2: AFR setup


From: Gerry Reno
Subject: Re: [Gluster-devel] GlusterFS 1.3.0-pre2.2: AFR setup
Date: Sun, 04 Mar 2007 13:39:58 -0500
User-agent: Thunderbird 1.5.0.9 (X11/20061219)

Krishna Srinivas wrote:
Hi Gerry,

If there are four machines: server1 server2 server3 server4

server1 server spec file:
volume brick
    type storage/posix
    option directory /var/www
end-volume

### Add network serving capabilities to the "brick" volume
volume server
    type protocol/server
    option transport-type tcp/server
    option listen-port 6996
    subvolumes brick
    option auth.ip.brick.allow *
end-volume
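
Since this server spec contains nothing host-specific, the same file should
work unchanged on server2, server3 and server4. As a minimal sketch of
starting the daemon (the /etc/glusterfs path is only a placeholder, and -f as
the spec-file option is assumed from the 1.3-era glusterfsd; check your
installed version):

# placeholder path; -f names the server spec file to load (1.3-era option)
glusterfsd -f /etc/glusterfs/glusterfs-server.vol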

server1's client spec file will be like this:
volume client1
    type storage/posix
    option directory /var/www
end-volume

volume client2
    type protocol/client
    option transport-type tcp/client
    option remote-host server2
    option remote-port 6996
    option remote-subvolume brick
end-volume

volume client3
    type protocol/client
    option transport-type tcp/client
    option remote-host server3
    option remote-port 6996
    option remote-subvolume brick
end-volume

volume client4
    type protocol/client
    option transport-type tcp/client
    option remote-host server4
    option remote-port 6996
    option remote-subvolume brick
end-volume

volume afr
    type cluster/afr
    subvolumes client1 client2 client3 client4
    option replicate *:4
end-volume
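
With a client spec like this in place, the volume would be mounted with the
glusterfs client binary. A minimal sketch, assuming the 1.3-era command line
and placeholder paths (neither the spec location nor the mount point comes
from this thread):

# -f names the client spec file; /mnt/glusterfs is only an example mount point
glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs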

In the afr volume above, all read() operations happen on client1, since it is
listed first among the subvolumes, so it serves your purpose.
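
As a sketch of the same scheme on server2 (an extrapolation from the spec
above, so treat the names and ordering here as assumptions): only the local
volume and the remote hosts change, and the local brick is listed first so
reads stay local.

volume client2
    type storage/posix
    option directory /var/www
end-volume

volume client1
    type protocol/client
    option transport-type tcp/client
    option remote-host server1
    option remote-port 6996
    option remote-subvolume brick
end-volume

### client3 and client4 are the same as in server1's client spec

volume afr
    type cluster/afr
    subvolumes client2 client1 client3 client4
    option replicate *:4
end-volume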

Let us know if you need more help.


Hi Krishna,
Ok, if I understand this correctly, are you saying that I only need the
server spec file on server1, or do I need one on all servers?
And would I need a slightly different client spec file on each client?
If so, is there a way to write the client spec file so that the same
file could be copied to each node?

Regards,
Gerry





