Re: [Gluster-devel] Local vs unify


From: Gareth Bult
Subject: Re: [Gluster-devel] Local vs unify
Date: Mon, 28 Apr 2008 10:31:27 +0100 (BST)

>As we are talking about a distributed/clustered FS, we haven't given any 
>benchmarks comparing against NFS or local disk. 

Sorry, I was looking at this: 
http://www.gluster.org/docs/index.php/GlusterFS_1.2.1-BENKI_Aggregated_I/O_vs_NFSv4_Benchmark
 

I read this as GlusterFS vs. NFSv4 .. in which GlusterFS kicks ass .. am I 
misreading .. ? 

Indeed, it states: 

>A single GlusterFS brick can perform better than a single NFS server. 

Incidentally, I do agree with this statement; however, in the context of 
"statistics" it could be construed as a little misleading. 

>Yes! Agreed. Whenever we try to improve the documentation, we can't see the 
>problems, because we can't see it as a newbie would: we end up writing a 
>proper spec file, compiling properly, etc. Hence we have asked everyone to 
>point out errors in the documentation so we can correct them. So far we have 
>corrected all the pages/errors that were pointed out here through the 
>mailing list. 

Generally I think the documentation is excellent; this wasn't a criticism .. 
It was simply a caveat, in case it turns out that there are ways around the 
issues I'm finding that I'm unaware of .. :) 

>Again, we are not comparing with local disk. Still, our goal is to reach 
>near-disk speed using InfiniBand RDMA, and we have already reached it for 
>I/O performance. As we have no metadata server (the namespace is just a 
>cache, not metadata), clustered metadata performance will certainly look 
>poor compared to local disk; we are working on enhancing our metadata 
>performance too. 

That's fantastic. I was about to say that I have also (as you may have 
gathered) managed to obtain better-than-local-disk performance from Gluster 
under some circumstances, and the fact that you are working on enhancing the 
metadata handling answers many of my queries/issues. 

Indeed, there are only two comments I would make: 

a. It would be nice to have something (have I missed it?) in the documentation 
(maybe in big letters .. ;) ) saying "metadata might be an issue for some 
people at the moment, resulting in slow performance under circumstances 
(x,y,z)". This would have saved a lot of hair-tearing time when I'd assumed "I" 
was at fault. (And you will notice, historically, that I'm not the only one 
who has commented on this issue ..) 

b. Having people on the mailing list say "yeah, but it was designed this way", 
implying no changes to the way it works were expected, isn't all that helpful 
.. if nobody is going to pipe up and say "we're working on it". 

>Thanks for supporting our design. We are working towards fixing those few 
>glitches! But it is true that things are not changing as fast as we would 
>have wished. Each new idea needs time to be converted into code and tested; 
>hence the slight delay. 

No problem, and thank you for this email; it has answered a major issue for me 
.. I am of course going to ask: 
a. Any timescale on the metadata changes? 
b. How much of a difference will it make .. will we be approaching local(ish) 
speeds, or are we just talking 2x current performance? 

Referring to the other post I just responded to, I DO appreciate the time 
developers put into this (and other) projects .. however, as a user who has 
made a significant time investment in the project (I also do not get paid for 
my time spent on Gluster!), it's always nice to feel that the information flow 
is two-way .. it would be nice, for example, to see updates to trouble tickets 
on the tracker ... ;-) 

Just for reference: 
21916, 31 Dec 07 - Self-heal on sparse files 
21917, 31 Dec 07 - Self-heal on striped volumes 

:) 

OK, one more: 

Documentation suggestion: 

"Issues we are aware of" .... 
(with each entry marked "won't fix", "may fix sometime", or "fix by date xxx" ...) 

:) 

Gareth. 

----- Original Message ----- 
From: "Amar S. Tumballi" <address@hidden> 
To: "Gareth Bult" <address@hidden> 
Cc: "Alain Baeckeroot" <address@hidden>, address@hidden 
Sent: Monday, April 28, 2008 9:14:44 AM GMT (Britain, Ireland, Portugal) 
Subject: Re: [Gluster-devel] Local vs unify 

Hi Alain, Gareth, 
Replies inline. 


On Sun, Apr 27, 2008 at 5:33 AM, Gareth Bult <address@hidden> wrote: 

>Are there some benchmarks available about this? 

I'm not entirely sure how useful benchmarks are, certainly in this context. As 
demonstrated by the benchmarks currently on the site, it's quite possible to 
make them show pretty much whatever you want if you pick your own context. 

Gluster REALLY is quick in some contexts; certainly I can make Gluster look 
quick compared to NFS if we're talking about access to a single file or copying 
larger files. If, on the other hand, we're talking smaller files (i.e. many 
real-world situations), then Gluster falls flat. "find" on a large Gluster FS 
can take minutes, rather than the seconds it takes on a local FS (for example). 

As we are talking about a distributed/clustered FS, we haven't given any 
benchmarks comparing against NFS or local disk. 

It may of course be that I'm doing it all wrong .. however (!) if I am, given 
the time I've spent and the fact that I do have it all working, there may be 
room for improvement when it comes to the documentation (!) 

Yes! Agreed. Whenever we try to improve the documentation, we can't see the 
problems, because we can't see it as a newbie would: we end up writing a proper 
spec file, compiling properly, etc. Hence we have asked everyone to point out 
errors in the documentation so we can correct them. So far we have corrected 
all the pages/errors that were pointed out here through the mailing list. 

>local storage CAN be notable 

Note: Gluster (in particular AFR) on large files is currently flawed (IMHO). 
Gluster on lots of small files is, as far as I can see, flawed in the sense of 
being too slow compared to local file-system access. This is not to say that 
Gluster is not useful or that the issues cannot be fixed. 


Again, we are not comparing with local disk. Still, our goal is to reach 
near-disk speed using InfiniBand RDMA, and we have already reached it for I/O 
performance. As we have no metadata server (the namespace is just a cache, not 
metadata), clustered metadata performance will certainly look poor compared to 
local disk; we are working on enhancing our metadata performance too. 
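
As an illustration of the namespace-as-cache arrangement described above, a 
GlusterFS 1.3-style unify client spec might look roughly like the sketch below. 
The host names, volume names, and the choice of the rr scheduler are 
assumptions for the example, not details from this thread: 

    # Sketch only - hypothetical names throughout.
    volume ns
      type protocol/client
      option transport-type tcp/client
      option remote-host server1
      option remote-subvolume brick-ns  # namespace brick: a lookup cache, not a metadata server
    end-volume

    # client1 and client2 would be protocol/client volumes defined like ns,
    # each pointing at a storage brick.
    volume unify0
      type cluster/unify
      option namespace ns               # unify consults this volume to resolve names
      option scheduler rr               # round-robin placement of new files
      subvolumes client1 client2
    end-volume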

You might be advised to set up your own test framework and test with your own 
data, to get a true measure of how Gluster will perform for "you" in "your" 
environment ... Gluster is SO flexible that generic benchmarks can often be 
nothing more than speculation .. 
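
Along those lines, one quick way to see the small-file/metadata cost that 
"find" exposes is simply to time the same tree on both mounts. The paths below 
are examples only; substitute your own mount points and a data set that 
resembles your real workload: 

    # Metadata-heavy walk over the same data set on each mount.
    time find /mnt/glusterfs/testdata -type f > /dev/null
    time find /srv/local/testdata -type f > /dev/null

    # Many small reads, closer to a real workload (tar reads every file).
    time tar cf /dev/null /mnt/glusterfs/testdata
    time tar cf /dev/null /srv/local/testdata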


Just to clarify: I think Gluster's general design is second to none .. I just 
think there are still a few implementation glitches to work through ... 


Thanks for supporting our design. We are working towards fixing those few 
glitches! But it is true that things are not changing as fast as we would have 
wished. Each new idea needs time to be converted into code and tested; hence 
the slight delay. 

Gareth. 

----- Original Message ----- 
From: "Alain Baeckeroot" < address@hidden > 
To: address@hidden 
Sent: Sunday, April 27, 2008 8:04:55 AM GMT (Britain, Ireland, Portugal) 
Subject: Re: [Gluster-devel] Local vs unify 

On Saturday, 26 April 2008, Gareth Bult wrote: 
> Technically, if you have local storage on each node then GlusterFS/Unify is a 
> useful solution, but the performance overhead compared to local storage can 
> be notable. 
> 

Are there some benchmarks available about this? 

Regards 
Alain Baeckeroot 

_______________________________________________ 
Gluster-devel mailing list 
address@hidden 
http://lists.nongnu.org/mailman/listinfo/gluster-devel 

-- 
Amar Tumballi 
Gluster/GlusterFS Hacker 
[bulde on #gluster/ irc.gnu.org ] 
http://www.zresearch.com - Commoditizing Super Storage!

