
Re: [Gluster-devel] NUFA Scheduler


From: Amar S. Tumballi
Subject: Re: [Gluster-devel] NUFA Scheduler
Date: Thu, 15 May 2008 17:58:50 -0700

Well, I would say that won't work.

http://www.gluster.org/docs/index.php/AFR_single_process

Check the above link. That approach should work fine. It can be extended
further to Unify with NUFA (the AFR with the local posix volume as the
local-volume-name).
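For reference, a single-process spec along those lines might look like the
following (a sketch only; the directory, hostname, port, and volume names are
illustrative, modeled on the config quoted below). The same process exports
only the posix volume to the peer and mounts the AFR volume locally, so the
AFR itself is never exported:

volume home-local
        type storage/posix
        option directory /gluster/home
end-volume

volume server
        type protocol/server
        option transport-type tcp/server
        option listen-port 6996
        subvolumes home-local
        option auth.ip.home-local.allow 192.168.*
end-volume

volume home-remote
        type protocol/client
        option transport-type tcp/client
        option remote-host 192.168.3.1
        option remote-port 6996
        option remote-subvolume home-local
end-volume

volume home-afr
        type cluster/afr
        subvolumes home-local home-remote
end-volume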

-amar


On Thu, May 15, 2008 at 5:10 PM, Gordan Bobic <address@hidden> wrote:

> OK, I clearly misunderstood something.
> All I am trying to achieve is have the same process handle the client and
> server aspects, so that mounting the AFR-ed FS gets the local server going,
> too. Is that not achievable? Because what I have _seems_ to actually work,
> only I cannot mount the FS from a single server from a remote node, I just
> get "end point not connected". But files between the two servers do get
> replicated.
>
> Attached is a sample config of what I'm talking about. The other server is
> the same only with home1 and home2 volumes reversed.
>
> Gordan
>
>
> Anand Babu Periasamy wrote:
>
>> Hi Gordan,
>> Schedulers are only available for the Unify translator. AFR doesn't
>> support NUFA.
>>
>> For an HPC-type workload, we usually do two separate mounts:
>> 1) Unify (with the NUFA scheduler) of all the nodes
>> 2) Unify (with the RR scheduler) of all AFRs, where each AFR mirrors two
>> nodes.
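>>
>> For example, the second mount could be specified roughly like this (a
>> sketch; the volume names are illustrative, and the node1..node4
>> protocol/client volumes and the ns namespace volume are assumed to be
>> defined elsewhere in the spec):
>>
>> volume afr-12
>>         type cluster/afr
>>         subvolumes node1 node2
>> end-volume
>>
>> volume afr-34
>>         type cluster/afr
>>         subvolumes node3 node4
>> end-volume
>>
>> volume unify-mirrors
>>         type cluster/unify
>>         option namespace ns
>>         option scheduler rr
>>         subvolumes afr-12 afr-34
>> end-volume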
>>
>> --
>> Anand Babu Periasamy
>> GPG Key ID: 0x62E15A31
>> Blog [http://ab.freeshell.org]
>> The GNU Operating System [http://www.gnu.org]
>> Z RESEARCH Inc [http://www.zresearch.com]
>>
>>
>>
>> Gordan Bobic wrote:
>>
>>> Hi,
>>>
>>> I'm trying to use the NUFA scheduler and it works fine between the
>>> machines that are both clients and servers, but I'm not having much luck
>>> with getting client-only machines to mount the share.
>>>
>>> I'm guessing that the problem is that I am trying to wrap a "type
>>> cluster/afr" brick with "option scheduler nufa" with a protocol/server, and
>>> export that. Is there any reason why this shouldn't work? Or do I have to
>>> set up a separate AFR brick without "option scheduler nufa" to export with a
>>> brick of "type protocol/server"?
>>>
>>
> volume home1
>        type storage/posix
>        option directory /gluster/home
> end-volume
>
> volume storage-home
>        type protocol/server
>        option transport-type tcp/server
>        option listen-port 6996
>        subvolumes home1
>        option auth.ip.storage-home.allow 192.168.*
> end-volume
>
> volume home2
>        type protocol/client
>        option transport-type tcp/client
>        option remote-host 192.168.3.1
>        option remote-port 6996
>        option remote-subvolume storage-home
> end-volume
>
> volume home-afr
>        type cluster/afr
>        option read-subvolume home1
>        option scheduler nufa
>        option nufa.local-volume-name home1
>        option nufa.limits.min-free-disk 1%
>        subvolumes home1 home2
> end-volume
>
> volume home
>        type protocol/server
>        option transport-type tcp/server
>        option listen-port 6997
>        option client-volume-filename /etc/glusterfs/glusterfs-client.vol
>        subvolumes home-afr
>        option auth.ip.home.allow 127.0.0.1,192.168.*
> end-volume
>
>
> _______________________________________________
> Gluster-devel mailing list
> address@hidden
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>
>


-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!

