

From: Kevin Wolf
Subject: Re: [Qemu-block] [PATCH 2/6] block: Acquire the AioContext in scsi_*_realize()
Date: Fri, 11 Jan 2019 16:02:13 +0100
User-agent: Mutt/1.10.1 (2018-07-13)

Am 10.01.2019 um 16:03 hat Alberto Garcia geschrieben:
> This fixes the following crash:
> 
> { "execute": "blockdev-add",
>   "arguments": {"driver": "null-co", "node-name": "hd0"}}
> { "execute": "object-add",
>   "arguments": {"qom-type": "iothread", "id": "iothread0"}}
> { "execute": "x-blockdev-set-iothread",
>   "arguments": {"node-name": "hd0", "iothread": "iothread0"}}
> { "execute": "device_add",
>    "arguments": {"id": "scsi-pci0", "driver": "virtio-scsi-pci"}}
> { "execute": "device_add",
>   "arguments": {"id": "scsi-hd0", "driver": "scsi-hd", "drive": "hd0"}}
> qemu: qemu_mutex_unlock_impl: Operation not permitted
> Aborted
> 
> Signed-off-by: Alberto Garcia <address@hidden>

> @@ -2553,6 +2563,7 @@ static int get_device_type(SCSIDiskState *s)
>  static void scsi_block_realize(SCSIDevice *dev, Error **errp)
>  {
>      SCSIDiskState *s = DO_UPCAST(SCSIDiskState, qdev, dev);
> +    AioContext *ctx;
>      int sg_version;
>      int rc;
>  
> @@ -2568,7 +2579,10 @@ static void scsi_block_realize(SCSIDevice *dev, Error **errp)
>      }
>  
>      /* check we are using a driver managing SG_IO (version 3 and after) */
> +    ctx = blk_get_aio_context(s->qdev.conf.blk);
> +    aio_context_acquire(ctx);
>      rc = blk_ioctl(s->qdev.conf.blk, SG_GET_VERSION_NUM, &sg_version);
> +    aio_context_release(ctx);
>      if (rc < 0) {
>          error_setg_errno(errp, -rc, "cannot get SG_IO version number");
>          if (rc != -EPERM) {

This is probably not enough: get_device_type() and
scsi_generic_read_device_inquiry() below issue more ioctls, so they need
the same protection. But we need to be careful not to include the
scsi_realize() call in the locked section if you take the lock again
there.
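
Something along these lines, perhaps (just a sketch, not a compilable
patch -- it elides the surrounding scsi_block_realize() code from
scsi-disk.c, and the exact error paths are up to you):

```c
static void scsi_block_realize(SCSIDevice *dev, Error **errp)
{
    SCSIDiskState *s = DO_UPCAST(SCSIDiskState, qdev, dev);
    AioContext *ctx;
    int sg_version;
    int rc;

    ...

    ctx = blk_get_aio_context(s->qdev.conf.blk);
    aio_context_acquire(ctx);

    /* check we are using a driver managing SG_IO (version 3 and after) */
    rc = blk_ioctl(s->qdev.conf.blk, SG_GET_VERSION_NUM, &sg_version);
    if (rc < 0) {
        ...
        goto out;
    }

    /* get_device_type() and scsi_generic_read_device_inquiry() issue
     * more ioctls, so keep them inside the locked section, too */
    ...

out:
    /* scsi_realize() must run with the lock released if it acquires
     * the AioContext itself, otherwise we deadlock */
    aio_context_release(ctx);
}
```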

Kevin
