Re: [Gluster-devel] GlusterFS High Availability Storage with GlusterFS on Centos5


From: Krishna Srinivas
Subject: Re: [Gluster-devel] GlusterFS High Availability Storage with GlusterFS on Centos5
Date: Fri, 10 Aug 2007 12:17:51 +0530

Hi Guido,

Where was it crashing when you had those volumes on the server side?
A few check-ins went in yesterday and the day before which might fix the
problem. Can you try the latest code and see if you still see the
crashes? If yes, please help us fix them.

And your spec files look OK. You can also use io-threads and io-cache
on the client side to improve performance.
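
For example, something along these lines could sit on top of the readahead
volume in your client spec. This is just a rough sketch -- the volume names,
thread count and cache sizes below are placeholders, so tune them for your
setup:

volume iot
        type performance/io-threads
        option thread-count 8              # placeholder; match your server spec
        option cache-size 64MB
        subvolumes readahead
end-volume

volume ioc
        type performance/io-cache
        option cache-size 64MB             # placeholder; size it to the RAM you can spare
        subvolumes iot
end-volume

If I remember right, the client mounts the last volume defined in the spec
file, so whatever you add should go at the end and take the previous
top-level volume as its subvolume.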

Thanks
Krishna

On 8/8/07, Guido Smit <address@hidden> wrote:
> Hi all,
>
> I've just set up my new cluster with GlusterFS. I tried the examples
> from the "GlusterFS High Availability Storage with GlusterFS" article on
> the site, but it kept crashing, so I've moved several things to the
> client config and it's running smoothly now. My question now is: what can
> I do to increase performance, and are there any mistakes in my configs?
>
> My servers all run CentOS 5 with glusterfs-1.3.pre7 and fuse-2.7.0 (from
> SourceForge).
>
> My configs:
>
> glusterfs-server.vol :
>
> volume mailspool-ds
>         type storage/posix                              # POSIX FS translator
>         option directory /home/export/mailspool         # export this directory
> end-volume
>
> volume mailspool-ns
>         type storage/posix                              # POSIX FS translator
>         option directory /home/export/mailspool-ns      # export this directory
> end-volume
>
> volume mailspool-ds-io              # io-threads can give performance a boost
>         type performance/io-threads
>         option thread-count 8
>         option cache-size 64MB
>         subvolumes mailspool-ds
> end-volume
>
> volume mailspool-ns-io
>         type performance/io-threads
>         option thread-count 8
>         option cache-size 64MB
>         subvolumes mailspool-ns
> end-volume
>
> volume server
>         type protocol/server
>         option transport-type tcp/server
>         subvolumes mailspool-ds-io mailspool-ns-io
>         option auth.ip.mailspool-ds-io.allow 192.168.1.*,127.0.0.1
>         option auth.ip.mailspool-ns-io.allow 192.168.1.*,127.0.0.1
> end-volume
>
>
> glusterfs-client.vol :
>
> ##############################################
> ###  GlusterFS Client Volume Specification  ##
> ##############################################
> volume pop2-ds
>         type protocol/client
>         option transport-type tcp/client        # for TCP/IP transport
>         option remote-host 192.168.1.42         # DNS round robin pointing to all 3 GlusterFS servers
>         option remote-subvolume mailspool-ds-io # name of the remote volume
> end-volume
>
> volume pop2-ns
>         type protocol/client
>         option transport-type tcp/client        # for TCP/IP transport
>         option remote-host 192.168.1.42         # DNS round robin pointing to all 3 GlusterFS servers
>         option remote-subvolume mailspool-ns-io # name of the remote volume
> end-volume
>
> volume smtp1-ds
>         type protocol/client
>         option transport-type tcp/client
>         option remote-host 192.168.1.224
>         option remote-subvolume mailspool-ds-io
> end-volume
>
> volume smtp1-ns
>         type protocol/client
>         option transport-type tcp/client
>         option remote-host 192.168.1.224
>         option remote-subvolume mailspool-ns-io
> end-volume
>
> volume mailspool-afr
>         type cluster/afr
>         subvolumes pop2-ds smtp1-ds
>         option replicate *:2
> end-volume
>
> volume mailspool-ns-afr
>         type cluster/afr
>         subvolumes pop2-ns smtp1-ns
>         option replicate *:2
> end-volume
>
> volume bricks
>         type cluster/unify
>         subvolumes mailspool-afr
>         option namespace mailspool-ns-afr
>         option scheduler alu
>         option alu.limits.min-free-disk  60GB              # stop creating files when free space < 60GB
>         option alu.limits.max-open-files 10000
>         option alu.order disk-usage:read-usage:write-usage:open-files-usage:disk-speed-usage
>         option alu.disk-usage.entry-threshold 2GB          # units in KB, MB and GB are allowed
>         option alu.disk-usage.exit-threshold  60MB         # units in KB, MB and GB are allowed
>         option alu.open-files-usage.entry-threshold 1024
>         option alu.open-files-usage.exit-threshold 32
>         option alu.stat-refresh.interval 10sec
> end-volume
>
> ### Add writeback feature
> volume writeback
>         type performance/write-behind
>         option aggregate-size 131072 # unit in bytes
>         subvolumes bricks
> end-volume
>
> ### Add readahead feature
> volume readahead
>         type performance/read-ahead
>         option page-size 65536     # unit in bytes
>         option page-count 16       # cache per file = (page-count x page-size)
>         subvolumes writeback
> end-volume
>
> --
> Regards,
>
> Guido Smit
>
>
>
>
> _______________________________________________
> Gluster-devel mailing list
> address@hidden
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>



