From: Jeff Cody
Subject: Re: [Qemu-block] [PATCH for-2.6 2/2] block/gluster: prevent data loss after i/o error
Date: Wed, 6 Apr 2016 08:44:18 -0400
User-agent: Mutt/1.5.21 (2010-09-15)

On Wed, Apr 06, 2016 at 01:02:16PM +0200, Kevin Wolf wrote:
> [ Adding some CCs ]
> 
> Am 06.04.2016 um 05:29 hat Jeff Cody geschrieben:
> > Upon receiving an I/O error after an fsync, by default gluster will
> > dump its cache.  However, QEMU will retry the fsync, which is especially
> > useful when encountering errors such as ENOSPC when using the werror=stop
> > option.  When using caching with gluster, however, the last written data
> > will be lost upon encountering ENOSPC.  Using the cache xlator option of
> > 'resync-failed-syncs-after-fsync' should cause gluster to retain the
> > cached data after a failed fsync, so that ENOSPC and other transient
> > errors are recoverable.
> > 
> > Signed-off-by: Jeff Cody <address@hidden>
> > ---
> >  block/gluster.c | 27 +++++++++++++++++++++++++++
> >  configure       |  8 ++++++++
> >  2 files changed, 35 insertions(+)
> > 
> > diff --git a/block/gluster.c b/block/gluster.c
> > index 30a827e..b1cf71b 100644
> > --- a/block/gluster.c
> > +++ b/block/gluster.c
> > @@ -330,6 +330,23 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
> >          goto out;
> >      }
> >  
> > +#ifdef CONFIG_GLUSTERFS_XLATOR_OPT
> > +    /* Without this, if fsync fails for a recoverable reason (for instance,
> > +     * ENOSPC), gluster will dump its cache, preventing retries.  This means
> > +     * almost certain data loss.  Not all gluster versions support the
> > +     * 'resync-failed-syncs-after-fsync' key value, but there is no way to
> > +     * discover during runtime if it is supported (this api returns success
> > +     * for unknown key/value pairs) */
> 
> Honestly, this sucks. There is apparently no way to operate gluster so
> we can safely recover after a failed fsync. "We hope everything is fine,
> but depending on your gluster version, we may now corrupt your image"
> isn't very good.
> 
> We need to consider very carefully if this is good enough to go on after
> an error. I'm currently leaning towards "no". That is, we should only
> enable this after Gluster provides us a way to make sure that the option
> is really set.
> 
> > +    ret = glfs_set_xlator_option(s->glfs, "*-write-behind",
> > +                                 "resync-failed-syncs-after-fsync", "on");
> > +    if (ret < 0) {
> > +        error_setg_errno(errp, errno, "Unable to set xlator key/value pair");
> > +        ret = -errno;
> > +        goto out;
> > +    }
> > +#endif
> 
> We also need to consider the case without CONFIG_GLUSTERFS_XLATOR_OPT.
> In this case (as well as theoretically in the case that the option
> didn't take effect - if only we could know about it), a failed
> glfs_fsync_async() is fatal and we need to stop operating on the image,
> i.e. set bs->drv = NULL like when we detect corruption in qcow2 images.
> The guest will see a broken disk that fails all I/O requests, but that's
> better than corrupting data.
>

Gluster versions that lack glfs_set_xlator_option() (and therefore
CONFIG_GLUSTERFS_XLATOR_OPT) will also predate the gluster patch that
removed the file descriptor invalidation upon error (unless that was a
relatively new bug/feature).  In that case, every subsequent operation
on the file descriptor in those versions will return an error anyway.
And I believe it is only rather old versions that don't support
glfs_set_xlator_option().



