From: Sergio Lopez
Subject: Re: [Qemu-block] [RFC PATCH] virtio-blk: schedule virtio_notify_config to run on main context
Date: Fri, 13 Sep 2019 09:46:23 +0200
User-agent: mu4e 1.2.0; emacs 26.2

Michael S. Tsirkin <address@hidden> writes:

> On Thu, Sep 12, 2019 at 08:19:25PM +0200, Sergio Lopez wrote:
>> Another AioContext-related issue, and this is a tricky one.
>> 
>> Executing a QMP block_resize request for a virtio-blk device running
>> on an iothread may cause a deadlock involving the following mutexes:
>> 
>>  - main thread
>>   * Has acquired: qemu_global_mutex.
>>   * Is trying to acquire: the iothread's AioContext lock via
>>     AIO_WAIT_WHILE (after aio_poll).
>> 
>>  - iothread
>>   * Has acquired: AioContext lock.
>>   * Is trying to acquire: qemu_global_mutex (via
>>     virtio_notify_config->prepare_mmio_access).
>
> Hmm is this really the only case iothread takes qemu mutex?

It's not the only place where the iothread takes the mutex, but it's the
only one we've found so far that does so upon request from a job running
on the main thread (which should be quite noticeable, due to the
deadlock).

> If any such access can deadlock, don't we need a generic
> solution? Maybe main thread can drop qemu mutex
> before taking io thread AioContext lock?

The mutex is acquired very early, in os_host_main_loop_wait(), so I
assume many code paths rely on the assumption that it has already been
acquired.

>> With this change, virtio_blk_resize checks if it's being called from a
>> coroutine context running on a non-main thread, and if that's the
>> case, creates a new coroutine and schedules it to be run on the main
>> thread.
>> 
>> This works, but means the actual operation is done
>> asynchronously, perhaps opening a window in which a "device_del"
>> operation may fit and remove the VirtIODevice before
>> virtio_notify_config() is executed.
>> 
>> I *think* it shouldn't be possible, as BHs will be processed before
>> any new QMP/monitor command, but I'm open to a different approach.
>> 
>> RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1744955
>> Signed-off-by: Sergio Lopez <address@hidden>
>> ---
>>  hw/block/virtio-blk.c | 25 ++++++++++++++++++++++++-
>>  1 file changed, 24 insertions(+), 1 deletion(-)
>> 
>> diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
>> index 18851601cb..c763d071f6 100644
>> --- a/hw/block/virtio-blk.c
>> +++ b/hw/block/virtio-blk.c
>> @@ -16,6 +16,7 @@
>>  #include "qemu/iov.h"
>>  #include "qemu/module.h"
>>  #include "qemu/error-report.h"
>> +#include "qemu/main-loop.h"
>>  #include "trace.h"
>>  #include "hw/block/block.h"
>>  #include "hw/qdev-properties.h"
>> @@ -1086,11 +1087,33 @@ static int virtio_blk_load_device(VirtIODevice *vdev, QEMUFile *f,
>>      return 0;
>>  }
>>  
>> +static void coroutine_fn virtio_resize_co_entry(void *opaque)
>> +{
>> +    VirtIODevice *vdev = opaque;
>> +
>> +    assert(qemu_get_current_aio_context() == qemu_get_aio_context());
>> +    virtio_notify_config(vdev);
>> +    aio_wait_kick();
>> +}
>> +
>>  static void virtio_blk_resize(void *opaque)
>>  {
>>      VirtIODevice *vdev = VIRTIO_DEVICE(opaque);
>> +    Coroutine *co;
>>  
>> -    virtio_notify_config(vdev);
>> +    if (qemu_in_coroutine() &&
>> +        qemu_get_current_aio_context() != qemu_get_aio_context()) {
>> +        /*
>> +         * virtio_notify_config() needs to acquire the global mutex,
>> +         * so calling it from a coroutine running on a non-main context
>> +         * may cause a deadlock. Instead, create a new coroutine and
>> +         * schedule it to be run on the main thread.
>> +         */
>> +        co = qemu_coroutine_create(virtio_resize_co_entry, vdev);
>> +        aio_co_schedule(qemu_get_aio_context(), co);
>> +    } else {
>> +        virtio_notify_config(vdev);
>> +    }
>>  }
>>  
>>  static const BlockDevOps virtio_block_ops = {
>> -- 
>> 2.21.0
