
Re: [Gluster-devel] Quick Question: glusterFS and kernel (stat) caching

From: Anand Avati
Subject: Re: [Gluster-devel] Quick Question: glusterFS and kernel (stat) caching
Date: Wed, 8 Aug 2007 12:15:50 +0530

In FUSE-based filesystems, the filesystem can set the stat (attribute) cache
timeout. By default it is 1 second. On disk-based filesystems this timeout is
effectively infinite (until the dentry cache is shrunk, stat() never goes to
the disk a second time), but for network-based filesystems it is unsafe to set
such high stat cache timeouts. Still, you are free to try. Currently this
value is hardcoded to 1 second in the GlusterFS codebase. I'll be adding a
command-line argument to set this timeout so you can experiment with it.
Please let us know your results.
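Until that command-line argument exists, the FUSE library itself already
accepts generic mount options for these caches. Whether the GlusterFS client
forwards unknown -o options through to FUSE is an assumption here, so the
sketch below uses sshfs purely to illustrate the option syntax:

```shell
# Generic FUSE mount options (values in seconds):
#   attr_timeout  - how long the kernel may serve cached stat() results
#   entry_timeout - how long cached name lookups (dentries) stay valid
# sshfs is used only as an example of a FUSE filesystem that passes
# unrecognized -o options down to the FUSE library.
sshfs host:/export /mnt -o attr_timeout=5 -o entry_timeout=5
```

With a 5-second attr_timeout, a second stat() within that window should be
answered from the kernel cache without a round trip to the server, at the
cost of potentially stale attributes on a shared mount.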


2007/8/6, Bernhard J. M. Grün <address@hidden>:
> Hello developers!
> At the moment we try to optimize our web server setup again.
> We tried to parallelize the stat calls in our web server software
> (lighttpd). But it seems the kernel does not cache the stat
> information from one request for further requests. But the same setup
> works fine on a local file system. So it seems that glusterFS and/or
> FUSE is not able to communicate with the kernel (stat) cache. Is this
> right? And is this problem solvable?
> Here is some schematic diagram of our approach:
> request -> lighttpd -> Threaded FastCGI program, that does only a stat
> (via fopen) -> lighttpd opens the file for reading and writes the data
> to the socket
> In a local scenario the second open uses the cached stat and so it
> does not block other reads in lighttpd at that point. But with a
> glusterFS mount it still blocks there.
> Maybe you can give me some advice. Thank you!
> Bernhard J. M. Grün
> _______________________________________________
> Gluster-devel mailing list
> address@hidden

It always takes longer than you expect, even when you take into account
Hofstadter's Law.

-- Hofstadter's Law
