From: Jeff Cody
Subject: Re: [Qemu-devel] [PATCH v7 RFC] block/vxhs: Initial commit to add Veritas HyperScale VxHS block device support
Date: Wed, 5 Oct 2016 00:02:59 -0400
User-agent: Mutt/1.5.24 (2015-08-30)

On Wed, Sep 28, 2016 at 12:13:32PM +0100, Stefan Hajnoczi wrote:
> On Tue, Sep 27, 2016 at 09:09:49PM -0700, Ashish Mittal wrote:

[...]

> > +/*
> > + * This is called by QEMU when a flush gets triggered from within
> > + * a guest at the block layer, either for IDE or SCSI disks.
> > + */
> > +static int vxhs_co_flush(BlockDriverState *bs)
> 
> This is called from coroutine context, please add the coroutine_fn
> function attribute to document this.
> 
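The declaration would then read (sketch, based on the quoted patch):

```c
static int coroutine_fn vxhs_co_flush(BlockDriverState *bs)
```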
> > +{
> > +    BDRVVXHSState *s = bs->opaque;
> > +    int64_t size = 0;
> > +    int ret = 0;
> > +
> > +    /*
> > +     * VDISK_AIO_FLUSH ioctl is a no-op at present and will
> > +     * always return success. This could change in the future.
> > +     */
> > +    ret = vxhs_qnio_iio_ioctl(s->qnio_ctx,
> > +            s->vdisk_hostinfo[s->vdisk_cur_host_idx].vdisk_rfd,
> > +            VDISK_AIO_FLUSH, &size, NULL, IIO_FLAG_SYNC);
> 
> This function is not allowed to block.  It cannot do a synchronous
> flush.  This line is misleading because the constant is called
> VDISK_AIO_FLUSH, but looking at the library code I see it's actually a
> synchronous call that ends up in a loop that sleeps (!) waiting for the
> response.
> 
> Please do an async flush and qemu_coroutine_yield() to return
> control to QEMU's event loop.  When the flush completes you can
> qemu_coroutine_enter() again to return from this function.
> 
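Something along these lines, i.e. a sketch only: the async submit call and its callback are hypothetical names (the qnio library would need to provide an equivalent), while the coroutine plumbing is QEMU's existing API:

```c
/* Hypothetical completion state; not part of the posted patch. */
typedef struct {
    Coroutine *co;
    int ret;
} VXHSFlushData;

static void vxhs_flush_cb(void *opaque, int ret)
{
    VXHSFlushData *data = opaque;
    data->ret = ret;
    qemu_coroutine_enter(data->co);     /* resume vxhs_co_flush() */
}

static int coroutine_fn vxhs_co_flush(BlockDriverState *bs)
{
    BDRVVXHSState *s = bs->opaque;
    VXHSFlushData data = { .co = qemu_coroutine_self() };

    /* Submit the flush without blocking (hypothetical async API). */
    vxhs_qnio_iio_flush_async(s->qnio_ctx,
            s->vdisk_hostinfo[s->vdisk_cur_host_idx].vdisk_rfd,
            vxhs_flush_cb, &data);

    qemu_coroutine_yield();             /* give control back to the event loop */
    return data.ret;                    /* filled in by vxhs_flush_cb() */
}
```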
> > +
> > +    if (ret < 0) {
> > +        trace_vxhs_co_flush(s->vdisk_guid, ret, errno);
> > +        vxhs_close(bs);
> 
> This looks unsafe.  Won't it cause double close() calls for s->fds[]
> when bdrv_close() is called later?
> 

Calling close on a failed flush is a good idea, in my opinion.  However, to
make it safe, bs->drv MUST be set to NULL after the call to vxhs_close().
That prevents double free / close calls, and also fails out new I/O.
(This is actually what the gluster driver does, for instance).
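I.e., something like this (a sketch against the quoted error path; the trace call and return value are taken from the patch as posted):

```c
    if (ret < 0) {
        trace_vxhs_co_flush(s->vdisk_guid, ret, errno);
        vxhs_close(bs);
        /* Fail out new I/O and keep bdrv_close() from closing
         * s->fds[] a second time. */
        bs->drv = NULL;
    }
    return ret;
```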

Jeff
