From: Juan Quintela
Subject: Re: QMP (without OOB) function running in thread different from the main thread as part of aio_poll
Date: Tue, 02 May 2023 12:35:35 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/28.2 (gnu/linux)

Fiona Ebner <f.ebner@proxmox.com> wrote:
> On 02.05.23 at 12:03, Fiona Ebner wrote:
>> On 28.04.23 at 18:54, Juan Quintela wrote:
>>> Kevin Wolf <kwolf@redhat.com> wrote:
>>>> On 28.04.2023 at 10:38, Juan Quintela wrote:
>>>>> Kevin Wolf <kwolf@redhat.com> wrote:
>>>>>>> I am perhaps a bit naive here, but is there a way to convince
>>>>>>> qemu that snapshot_save_job_bh *HAS* to run on the main thread?
>>>>>>
>>>>>> I believe we're talking about a technicality here. I asked another more
>>>>>> fundamental question that nobody has answered yet:
>>>>>>
>>>>>> Why do you think that it's ok to call bdrv_writev_vmstate() without
>>>>>> holding the BQL?
>>>>>
>>>>> I will say this function starts with bdrv_ (i.e. block layer people) and
>>>>> ends with _vmstate (i.e. migration people).
>>>>>
>>>>> To be honest, I don't know.  That is why I _supposed_ you have an idea.
>>>>
>>>> My idea is that bdrv_*() can only be called when you hold the BQL, or
>>>> for BlockDriverStates in an iothread the AioContext lock.
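
As an aside, the pattern Kevin describes usually looks something like the
untested sketch below (bdrv_get_aio_context()/aio_context_acquire() are the
real helpers as far as I remember, but treat the details as illustrative
only):

    /* Illustrative sketch, not a patch: touching a BlockDriverState
     * that may live in an iothread.  For a BDS in the main context,
     * holding the BQL is enough; otherwise take its AioContext lock. */
    AioContext *ctx = bdrv_get_aio_context(bs);

    aio_context_acquire(ctx);
    ret = bdrv_writev_vmstate(bs, &qiov, pos);  /* or any other bdrv_*() call */
    aio_context_release(ctx);
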
>>>>
>>>> Apparently dropping the BQL in migration code was introduced in Paolo's
>>>> commit 9b095037527.
>>>
>>> Damn.  I reviewed it, so I am as guilty as the author.
>>> After 10 years without problems, I will not blame that patch.
>>>
>>> I guess we changed something else that broke doing it without the lock.
>>>
>>> But no, I still don't have suggestions/ideas.
>>>
>> 
>> I do feel like the issue might be very difficult to trigger under normal
>> circumstances. Depending on the configuration and what you do in the
>> guest, aio_poll in a vCPU thread does not happen often and I imagine
>> snapshot-save is also not a super frequent operation for most people. It
>> still takes me a while to trigger the issue by issuing lots of pflash
>> writes and running snapshot-save in a loop; I'd guess about 30-60
>> snapshots. Another reason might be that generated co-wrappers were less
>> common in the past?
>> 
>>>> I'm not sure what this was supposed to improve in
>>>> the case of snapshots because the VM is stopped anyway.
>> 
>> Is it? Quoting Juan:
>>> d- snapshots are a completely different beast, that don't really stop
>>>    the guest in the same way at that point, and sometimes it shows in
>>>    these subtle details.
>> 
>>>> Would anything bad happen if we removed the BQL unlock/lock section in
>>>> qemu_savevm_state() again?
>>>
>>> Dunno.
>>>
>>> For what it's worth, I can say that it survives migration-test, but don't
>>> ask me why/how/...
>>>
>>> Fiona, can you check if it fixes your troubles?
>>>
>> 
>> Just removing the single section in qemu_savevm_state() breaks even the
>> case where snapshot_save_job_bh() is executed in the main thread,
>> because ram_init_bitmaps() will call qemu_mutex_lock_iothread_impl()
>> which asserts that it's not already locked.
>> 
>> Also removing the lock/unlock pair in ram_init_bitmaps() seems to work. 
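
(If ripping the lock/unlock pair out entirely feels too blunt, another option
might be to only take the BQL there when it is not already held, along the
lines of the untested sketch below; qemu_mutex_iothread_locked() should tell
us that, if I remember correctly.)

    /* Untested sketch: let the BQL section in ram_init_bitmaps()
     * tolerate being entered with the lock already held, instead of
     * tripping the assertion in qemu_mutex_lock_iothread_impl(). */
    bool release_lock = !qemu_mutex_iothread_locked();

    if (release_lock) {
        qemu_mutex_lock_iothread();
    }

    /* ... existing bitmap setup ... */

    if (release_lock) {
        qemu_mutex_unlock_iothread();
    }
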
>
> Well, after a few more attempts, I got a new failure (running with the
> two changes mentioned above), but it seems to happen later and, at
> first glance, doesn't seem to be related to the lock anymore:

Can you revert the whole commit:

commit 9b0950375277467fd74a9075624477ae43b9bb22
Author: Paolo Bonzini <pbonzini@redhat.com>
Date:   Fri Feb 22 17:36:28 2013 +0100

    migration: run setup callbacks out of big lock

Because we are back at:
>
>> Thread 21 "CPU 0/KVM" received signal SIGABRT, Aborted.
>> [Switching to Thread 0x7fd291ffb700 (LWP 136620)]
>> __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
>> 50   ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
>> (gdb) bt
>> #0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
>> #1  0x00007fd2b8b3e537 in __GI_abort () at abort.c:79
>> #2 0x00007fd2b8b3e40f in __assert_fail_base (fmt=0x7fd2b8cb66a8
>> "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n",
>> assertion=0x557e0ebb7193 "!n->vhost_started", file=0x557e0ebb65f0
>> "../hw/net/virtio-net.c", line=3811, function=<optimized out>) at
>> assert.c:92
>> #3 0x00007fd2b8b4d662 in __GI___assert_fail
>> (assertion=0x557e0ebb7193 "!n->vhost_started", file=0x557e0ebb65f0
>> "../hw/net/virtio-net.c", line=3811, function=0x557e0ebb7740
>> <__PRETTY_FUNCTION__.3> "virtio_net_pre_save") at assert.c:101
>> #4  0x0000557e0e6c21c9 in virtio_net_pre_save (opaque=0x557e12c480b0) at 
>> ../hw/net/virtio-net.c:3811
>> #5 0x0000557e0e4f63ee in vmstate_save_state_v (f=0x7fd28442f730,
>> vmsd=0x557e0efb53c0 <vmstate_virtio_net>, opaque=0x557e12c480b0,
>> vmdesc=0x7fd2840978e0, version_id=11) at ../migration/vmstate.c:330
>> #6 0x0000557e0e4f6390 in vmstate_save_state (f=0x7fd28442f730,
>> vmsd=0x557e0efb53c0 <vmstate_virtio_net>, opaque=0x557e12c480b0,
>> vmdesc_id=0x7fd2840978e0) at ../migration/vmstate.c:318
>> #7 0x0000557e0e51c80a in vmstate_save (f=0x7fd28442f730,
>> se=0x557e12c4c430, vmdesc=0x7fd2840978e0) at
>> ../migration/savevm.c:1000
>> #8 0x0000557e0e51d942 in
>> qemu_savevm_state_complete_precopy_non_iterable (f=0x7fd28442f730,
>> in_postcopy=false, inactivate_disks=false) at
>> ../migration/savevm.c:1463
>> #9 0x0000557e0e51db33 in qemu_savevm_state_complete_precopy
>> (f=0x7fd28442f730, iterable_only=false, inactivate_disks=false) at
>> ../migration/savevm.c:1529
>> #10 0x0000557e0e51df3d in qemu_savevm_state (f=0x7fd28442f730, 
>> errp=0x7fd28425e1f8) at ../migration/savevm.c:1635
>> #11 0x0000557e0e520548 in save_snapshot (name=0x7fd28425e2b0
>> "snap0", overwrite=false, vmstate=0x7fd284479c20 "scsi0",
>> has_devices=true, devices=0x7fd284097920, errp=0x7fd28425e1f8) at
>> ../migration/savevm.c:2952
>> #12 0x0000557e0e520f9b in snapshot_save_job_bh (opaque=0x7fd28425e130) at 
>> ../migration/savevm.c:3251

Here we are in aio_bh_call(), which ends up calling snapshot_save_job_bh(),
and it didn't end well.
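
If I read migration/savevm.c right, the BH gets scheduled on the main context
with something like the call below (details from memory, may differ), so
whichever thread happens to run aio_poll() on that context -- possibly a vCPU
thread inside a nested event loop -- will end up dispatching it:

    /* The snapshot job coroutine hands the actual work to a bottom
     * half on the main AioContext... */
    aio_bh_schedule_oneshot(qemu_get_aio_context(),
                            snapshot_save_job_bh, job);

    /* ...and aio_bh_call() later invokes it from whichever thread is
     * currently polling that context, which is not necessarily the
     * main loop thread. */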

It appears that there is no easy way to guarantee that the snapshot code is
only run on the main I/O thread, so my only other suggestion right now is
that you check snapshot_save_job_bh() and see whether it ever runs on a
non-vCPU thread when you get the test to run correctly.
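
Something like the untested sketch below is what I mean by "check": an
assertion at the top of the BH, so the bad case aborts immediately instead of
failing somewhere down the line (both helpers exist in the tree as far as I
remember):

    static void snapshot_save_job_bh(void *opaque)
    {
        /* Untested sketch: complain loudly if the BH ever runs outside
         * the main loop thread / main AioContext. */
        assert(qemu_in_main_thread());
        assert(qemu_get_current_aio_context() == qemu_get_aio_context());

        /* ... existing body unchanged ... */
    }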

Later, Juan.



