Re: [Gluster-devel] Re-exporting NFS to vmware


From: Gordan Bobic
Subject: Re: [Gluster-devel] Re-exporting NFS to vmware
Date: Thu, 06 Jan 2011 20:59:39 +0000
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.4) Gecko/20100628 Red Hat/3.1-1.el6 Lightning/1.0b2 Thunderbird/3.1

On 01/06/2011 06:52 PM, Christopher Hawkins wrote:
> This is not scalable to more than a few servers, but one possibility
> for this type of setup is to use LVS in conjunction with Gluster
> storage.
>
> Say there were 3 gluster storage nodes doing a 3x replicate. One of
> them (or a small 4th server) could have a virtual IP that accepts
> inbound NFS requests and then forwards them to one of the 3 "real
> server" gluster storage nodes. Each would be listening on NFS,
> would have all the files, and would be able to serve directly and
> respond with the IP that the packet was originally addressed to.

If I am understanding what you are describing correctly, you are talking about using LVS in direct routing mode. That's pretty similar to what I was describing. This would be scalable right up to the point where the LB runs out of bandwidth to handle the inbound client requests. That is probably fine for situations where clients are heavy readers, but it wouldn't scale at all if the clients are heavy writers.
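
(For reference, the basic LVS-DR plumbing for NFS would be something along these lines - addresses entirely made up, with 10.0.0.100 as the VIP and 10.0.0.1-3 as the gluster nodes:

  # on the director: NFS over TCP on the VIP, round-robin between the nodes
  ipvsadm -A -t 10.0.0.100:2049 -s rr
  # add the three gluster boxes as real servers in direct-routing (-g) mode
  ipvsadm -a -t 10.0.0.100:2049 -r 10.0.0.1 -g
  ipvsadm -a -t 10.0.0.100:2049 -r 10.0.0.2 -g
  ipvsadm -a -t 10.0.0.100:2049 -r 10.0.0.3 -g

  # on each real server: put the VIP on lo and stop it answering ARP for it
  ip addr add 10.0.0.100/32 dev lo
  sysctl -w net.ipv4.conf.all.arp_ignore=1
  sysctl -w net.ipv4.conf.all.arp_announce=2

NFSv3 would also need portmapper/mountd dealt with the same way, or pinned to fixed ports.)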

What I was talking about in the previous email is having each storage node act as an LB itself, able to load-balance traffic to the other nodes based on which node the actual files reside on - it'd have to be a GlusterFS translator, because it would have to look up which node has the file and pass the request to that node.

It would essentially require a GlusterFS translator that combines LVS functionality with the GLFS interfaces to establish which node has the requested file.
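
Purely as a sketch of the idea (made-up names, nothing to do with the real xlator interfaces), the routing decision would boil down to something like:

# Rough sketch only - hypothetical names, not the actual GlusterFS xlator API.
# The node that receives the NFS request looks up which storage node holds
# the file and, if that isn't the local node, hands the request off to the
# owner rather than proxying the data itself.

import hashlib

CLUSTER_NODES = ["gluster1", "gluster2", "gluster3"]  # made-up node names
LOCAL_NODE = "gluster1"

def owner_of(path):
    """Stand-in for the real layout lookup (hash of the path -> owning node)."""
    h = int(hashlib.md5(path.encode()).hexdigest(), 16)
    return CLUSTER_NODES[h % len(CLUSTER_NODES)]

def handle_request(path, request):
    owner = owner_of(path)
    if owner == LOCAL_NODE:
        return serve_locally(path, request)    # the file is already here
    return forward_to(owner, path, request)    # LVS-style handoff to the owner

def serve_locally(path, request):
    return "served %s from %s" % (path, LOCAL_NODE)

def forward_to(node, path, request):
    return "forwarded %s to %s" % (path, node)

On a straight 3x replicate every node has every file anyway, so the lookup only really earns its keep on distributed (or distributed-replicated) volumes.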

And if I were to find myself with a month or two of coding time on my hands, I might even be tempted to implement such a thing - but don't hold your breath, it's unlikely to happen any time soon. :)

Gordan


