
Re: Fwd: [Gluster-devel] Performance

From: Jerker Nyberg
Subject: Re: Fwd: [Gluster-devel] Performance
Date: Mon, 15 Oct 2007 12:16:43 +0200 (CEST)


I've done some testing on old hardware (5 nodes with 1 GHz Athlon, 512 MB RAM, 40 GB IDE disks, 100 Mbit/s Ethernet). Both NFS and GlusterFS saturate the Ethernet when using large block sizes. For small block sizes, however, GlusterFS runs out of CPU... Is there a good way to reduce CPU usage on the GlusterFS client and improve small-block-size performance? I'm trying different combinations of translators to see if I can find a slightly better configuration (but after removing all of them, bonnie++ takes ages to complete). :)
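As a minimal sketch of what I mean by the block-size effect (the target directory is an assumption -- point DIR at the GlusterFS mount; it defaults to /tmp here only so the sketch runs anywhere): writing the same 16 MB with shrinking block sizes shows how per-block overhead, e.g. FUSE round trips, eats client CPU.

```shell
#!/bin/sh
# Write the same total amount of data with different block sizes.
# dd reports throughput on its last output line; smaller blocks mean
# more calls through the filesystem per byte, hence more client CPU.
DIR=${DIR:-/tmp}                   # assumption: substitute your GlusterFS mount
for bs in 1048576 65536 1024; do
  count=$((16777216 / bs))         # keep the total at 16 MB
  dd if=/dev/zero of="$DIR/bs.tmp" bs="$bs" count="$count" 2>&1 | tail -n1
done
rm -f "$DIR/bs.tmp"
```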

bonnie++ 1.03:
                 Sequential Output  Sequential Input
                 Per Chr    CPU     Per Chr    CPU
   local disk:   17324 K/s  94%     18154 K/s  89%
   GlusterFS:     4305 K/s  21%      7195 K/s  36%
   NFS:           6197 K/s  29%     11253 K/s  56%

Well, NFS seems to be more efficient than GlusterFS on this old hardware...

One of the reasons I find GlusterFS interesting is the possibility of increasing the number of IOs per second by using many drives in parallel. That has been the major performance bottleneck for me when running mail/web hosting. More drives help, and adding more servers instead of buying large RAIDs would be neat. In a way it does seem to scale, but as far as I understand, at the cost that every node spends more CPU accessing the file system. Perhaps not a problem on modern hardware?

I also tried some BitTorrent seeding from GlusterFS. For BitTorrent clients that don't use much read-ahead cache, seeding is normally limited by disk seek time (IO/s). I was hoping that using remote disks with, in total, five times the IO/s of the local disk would increase the seeding rate, but instead GlusterFS and the BitTorrent client just compete for CPU. Well, for BitTorrent the problem seems to be solved by the read-ahead cache in modern clients anyway, which increases the RAM cache hit rate a lot.

All bonnie output here:

Jerker Nyberg.

On Mon, 15 Oct 2007, Steffen Grunewald wrote:

What's not so beautiful is that the first dd (always NFS) includes
staging of the file from the input media into the buffer cache
(/dev/zero means filling memory with zero bytes, which is certainly
faster than reading from a physical disk).
I would have repeated the write tests to see whether ordering
is important:
- nfs write
- glusterfs write
- nfs write again
- glusterfs write again
Buffers are often able to fool the benchmarker.
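The repeated-write ordering test above could be sketched like this (the mount points are assumptions -- point NFS and GLU at your actual NFS and GlusterFS mounts; they default to /tmp here just so the script runs):

```shell
#!/bin/sh
# Run nfs write, glusterfs write, nfs write again, glusterfs write
# again, flushing caches between runs so an earlier run cannot feed
# a later one out of the page cache.
NFS=${NFS:-/tmp}    # assumption: substitute your NFS mount
GLU=${GLU:-/tmp}    # assumption: substitute your GlusterFS mount
for target in "$NFS" "$GLU" "$NFS" "$GLU"; do
  sync
  # Drop the page cache between runs if we are allowed to (root, Linux):
  [ -w /proc/sys/vm/drop_caches ] && echo 3 > /proc/sys/vm/drop_caches
  dd if=/dev/zero of="$target/bench.tmp" bs=1M count=64 conv=fsync 2>&1 | tail -n1
  rm -f "$target/bench.tmp"
done
```

conv=fsync makes dd flush the file to disk before reporting, so the number reflects actual writeback rather than data parked in the page cache.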

Also, some information about your machine is missing - but I suppose 1 GB
would easily fit into main memory. What about *several* GBs, to effectively
trash the page cache?
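A sketch of that cache-busting run (sizes and paths are assumptions -- set SIZE_MB to a few multiples of your RAM; it defaults to 64 MB here only so the sketch runs quickly):

```shell
#!/bin/sh
# Write, then read back, a file larger than RAM so the page cache
# cannot hold it and the read is forced to touch the filesystem.
DIR=${DIR:-/tmp}          # assumption: substitute the mount under test
SIZE_MB=${SIZE_MB:-64}    # assumption: use several x RAM for a real run
dd if=/dev/zero of="$DIR/big.tmp" bs=1M count="$SIZE_MB" conv=fsync 2>&1 | tail -n1
dd if="$DIR/big.tmp" of=/dev/null bs=1M 2>&1 | tail -n1
rm -f "$DIR/big.tmp"
```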

Steffen, always doubtful when it comes to benchmarks

Steffen Grunewald * MPI Grav.Phys.(AEI) * Am Mühlenberg 1, D-14476 Potsdam
Cluster Admin * *
* e-mail: steffen.grunewald(*) * +49-331-567-{fon:7233,fax:7298}
No Word/PPT mails -

