
From: ashish mittal
Subject: Re: [Qemu-devel] [PATCH v7 RFC] block/vxhs: Initial commit to add Veritas HyperScale VxHS block device support
Date: Tue, 11 Oct 2016 00:56:06 -0700

I have checked in a test server to libqnio that allows opening, reading from,
and writing to a vxhs vdisk using the qemu-io binary.

Steps to run the test server:
(1) Touch a file in /tmp with the base-name of the vdisk ID to be opened.
touch /tmp/\{98f48eef-b62f-46ee-b6e3-ad48ffd9ad0a\}
(2) Start the test server. It listens on port 9999 by default.
/path/to/qnio_server
(3) In another terminal, start the qemu-io binary and open the vdisk:
qemu-io> open vxhs://127.0.0.1:9999/%7B98f48eef-b62f-46ee-b6e3-ad48ffd9ad0a%7D
(4) Now you can write data to and read data from the vdisk, which is backed by a file.
qemu-io> writev -P 81 0 1k
qemu-io> read -v 0 1k
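
The same can also be run non-interactively, e.g. (assuming the vxhs URI is
accepted directly on the qemu-io command line):
qemu-io -c 'writev -P 81 0 1k' -c 'read -v 0 1k' 'vxhs://127.0.0.1:9999/%7B98f48eef-b62f-46ee-b6e3-ad48ffd9ad0a%7D'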

The following change to my last patch is needed to allow opening the vdisk
(vxhs_qnio_iio_open() returns 0 on success, so the error path should trigger
on a non-zero return):
$ git diff
diff --git a/block/vxhs.c b/block/vxhs.c
index 90a4343..d849a9b 100644
--- a/block/vxhs.c
+++ b/block/vxhs.c
@@ -1215,7 +1215,7 @@ static int vxhs_qemu_init(QDict *options, BDRVVXHSState *s,
     }

     ret = vxhs_qnio_iio_open(cfd, of_vsa_addr, rfd, file_name);
-    if (!ret) {
+    if (ret) {
         error_setg(&local_err, "Failed qnio_iio_open");
         ret = -EIO;
     }

Will work on the qemu-iotests test suite next.

Regards,
Ashish

On Tue, Oct 4, 2016 at 9:02 PM, Jeff Cody <address@hidden> wrote:
> On Wed, Sep 28, 2016 at 12:13:32PM +0100, Stefan Hajnoczi wrote:
>> On Tue, Sep 27, 2016 at 09:09:49PM -0700, Ashish Mittal wrote:
>
> [...]
>
>> > +/*
>> > + * This is called by QEMU when a flush gets triggered from within
>> > + * a guest at the block layer, either for IDE or SCSI disks.
>> > + */
>> > +static int vxhs_co_flush(BlockDriverState *bs)
>>
>> This is called from coroutine context, please add the coroutine_fn
>> function attribute to document this.
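
A minimal sketch of the annotated prototype this suggests (function body
unchanged):

static coroutine_fn int vxhs_co_flush(BlockDriverState *bs)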
>>
>> > +{
>> > +    BDRVVXHSState *s = bs->opaque;
>> > +    int64_t size = 0;
>> > +    int ret = 0;
>> > +
>> > +    /*
>> > +     * VDISK_AIO_FLUSH ioctl is a no-op at present and will
>> > +     * always return success. This could change in the future.
>> > +     */
>> > +    ret = vxhs_qnio_iio_ioctl(s->qnio_ctx,
>> > +            s->vdisk_hostinfo[s->vdisk_cur_host_idx].vdisk_rfd,
>> > +            VDISK_AIO_FLUSH, &size, NULL, IIO_FLAG_SYNC);
>>
>> This function is not allowed to block.  It cannot do a synchronous
>> flush.  This line is misleading because the constant is called
>> VDISK_AIO_FLUSH, but looking at the library code I see it's actually a
>> synchronous call that ends up in a loop that sleeps (!) waiting for the
>> response.
>>
>> Please do an async flush and qemu_coroutine_yield() to return
>> control to QEMU's event loop.  When the flush completes you can
>> qemu_coroutine_enter() again to return from this function.
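
A rough sketch of that pattern, for reference; the async libqnio entry point
(vxhs_qnio_iio_ioctl_async) and its callback signature below are hypothetical
placeholders, not the actual library API:

typedef struct {
    Coroutine *co;   /* coroutine waiting for the flush to complete */
    int ret;         /* result reported by the completion callback */
} VXHSFlushData;

/* Hypothetical completion callback invoked when the flush finishes.
 * In practice the re-entry would have to happen from the BDS's AioContext. */
static void vxhs_flush_cb(void *opaque, int ret)
{
    VXHSFlushData *data = opaque;
    data->ret = ret;
    qemu_coroutine_enter(data->co);
}

static coroutine_fn int vxhs_co_flush(BlockDriverState *bs)
{
    BDRVVXHSState *s = bs->opaque;
    VXHSFlushData data = { .co = qemu_coroutine_self(), .ret = -EINPROGRESS };
    int64_t size = 0;

    /* Hypothetical async variant of the flush ioctl: no IIO_FLAG_SYNC,
     * completion is reported through the callback instead of blocking. */
    vxhs_qnio_iio_ioctl_async(s->qnio_ctx,
            s->vdisk_hostinfo[s->vdisk_cur_host_idx].vdisk_rfd,
            VDISK_AIO_FLUSH, &size, vxhs_flush_cb, &data);

    qemu_coroutine_yield();    /* hand control back to QEMU's event loop */
    return data.ret;           /* filled in by vxhs_flush_cb() */
}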
>>
>> > +
>> > +    if (ret < 0) {
>> > +        trace_vxhs_co_flush(s->vdisk_guid, ret, errno);
>> > +        vxhs_close(bs);
>>
>> This looks unsafe.  Won't it cause double close() calls for s->fds[]
>> when bdrv_close() is called later?
>>
>
> Calling the close on a failed flush is a good idea, in my opinion.  However,
> to make it safe, bs->drv MUST be set to NULL after the call to vxhs_close().
> That will prevent double free / close calls,  and also fail out new I/O.
> (This is actually what the gluster driver does, for instance).
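
A minimal sketch of that error path with the suggested change (the trace and
close calls are from the current patch; the -EIO return is assumed):

    if (ret < 0) {
        trace_vxhs_co_flush(s->vdisk_guid, ret, errno);
        vxhs_close(bs);
        bs->drv = NULL;   /* fail out new I/O and avoid a double close later */
        return -EIO;
    }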
>
> Jeff
>
>
>


