
Re: [Gluster-devel] FUSE 2.7.3?

From: Brent A Nelson
Subject: Re: [Gluster-devel] FUSE 2.7.3?
Date: Tue, 6 May 2008 16:18:43 -0400 (EDT)

The unfs3 rm -rf issue occurs with a simple ext3 filesystem (and with a Ceph filesystem, using both FUSE and kernel clients), too, so it's not a GlusterFS bug. I'll see about contacting the unfs3 author...
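A minimal sketch of how the symptom above can be reproduced. This script and its layout are hypothetical (not from the thread): point it at a directory on the NFS-mounted re-export; by default it uses a local temp directory, where the bug should not appear and the tree should be removed cleanly.

```shell
#!/bin/sh
# Hypothetical repro sketch for "rm -rf leaves entries behind" over an
# NFS re-export. TARGET defaults to a local temp dir (expected to pass);
# pass a path on the re-exported mount to test the actual failure mode.
TARGET="${1:-$(mktemp -d)}"
mkdir -p "$TARGET/tree"
# Many entries per directory matter here: the reported failure mode is
# directory entries being skipped while their siblings are unlinked.
for d in a b c d; do
    mkdir -p "$TARGET/tree/$d"
    i=0
    while [ "$i" -lt 200 ]; do
        : > "$TARGET/tree/$d/file$i"
        i=$((i + 1))
    done
done
rm -rf "$TARGET/tree"
if [ -e "$TARGET/tree" ]; then
    echo "FAIL: entries left behind:"
    find "$TARGET/tree" | head
else
    echo "PASS: tree fully removed"
fi
```

On an affected unfs3 re-export, the expectation from the thread is that the rm -rf would have to be repeated before the tree is fully gone.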



On Thu, 1 May 2008, Brent A Nelson wrote:

The nfs-kernel-server only seems to have the ESTALE issue when idle clients are cd'ed into the GlusterFS mount. I have not encountered any other issues with nfs-kernel-server in recent GlusterFS builds.
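The idle-client symptom described above can be sketched as follows. This is a hypothetical test harness, not anything from the thread: run it with the NFS mount of the re-exported GlusterFS as the first argument (and a realistic idle time as the second); on a local directory, which is the default, the listing should simply succeed.

```shell
#!/bin/sh
# Hypothetical sketch of the "idle shell gets ESTALE" symptom.
# $1: directory to sit in (default: local temp dir, expected to stay valid)
# $2: idle time in seconds (use minutes/hours on a real mount)
DIR="${1:-$(mktemp -d)}"
IDLE="${2:-2}"
cd "$DIR" || exit 1
sleep "$IDLE"
# On an affected setup, operations through the now-stale handle fail
# with ESTALE after the idle period.
if ls . >/dev/null 2>&1; then
    echo "OK: directory handle still valid"
else
    echo "ERROR (possibly ESTALE) after idle period"
fi
```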

However, the ESTALE issue is a bit of a showstopper, so I also test with unfs3, a user-mode NFS server. It doesn't have the ESTALE issue, but it does have the rm issue (also a showstopper), exactly as described in the discussion thread that led to the FUSE patch.

The last version I tested was from the April 3 TLA archive, however.



P.S. There were earlier rm -rf issues affecting both kernel-nfs and unfs3 that did get fixed, but this one still persists (unless it was fixed after April 3).

On Thu, 1 May 2008, Anand Avati wrote:

Thanks for the pointer. We have had that (or a similar/equivalent) fix
in GlusterFS for a while. From what I recollect, this change made
GlusterFS work fine over NFS re-export (for Solaris clients as well), and
the issue currently being faced is the ESTALE error when you keep a shell
idle for a while. Please correct me if otherwise.


2008/4/22 Brent A Nelson <address@hidden>:

Thanks, but it looks like I might as well stick with 2.7.2.  The patch is
actually for their fusexmp_fh.c example file, so the fix actually needs to
be made in all the client code bases out there...

On that note, GlusterFS dudes, could you please take a look at the small
patch and adjust the GlusterFS client code accordingly? This should hopefully
eliminate the last known glitch with unfs3 re-export, which is that rm -rf
often doesn't work fully.

