
Re: [Qemu-devel] [RFC PATCH] qcow2: Fix race in cache invalidation


From: Alexey Kardashevskiy
Subject: Re: [Qemu-devel] [RFC PATCH] qcow2: Fix race in cache invalidation
Date: Sun, 28 Sep 2014 21:14:13 +1000
User-agent: Mozilla/5.0 (X11; Linux i686 on x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.1.2

On 09/26/2014 12:05 AM, Alexey Kardashevskiy wrote:
> On 09/25/2014 10:39 PM, Kevin Wolf wrote:
>> Am 25.09.2014 um 14:29 hat Alexey Kardashevskiy geschrieben:
>>> On 09/25/2014 08:20 PM, Kevin Wolf wrote:
>>>> Am 25.09.2014 um 11:55 hat Alexey Kardashevskiy geschrieben:
>>>>> Right. Cool. So is below what was suggested? I am double-checking as it
>>>>> does not solve the original issue - the bottom half is called first and
>>>>> then nbd_trip() crashes in qcow2_co_flush_to_os().
>>>>>
>>>>> diff --git a/block.c b/block.c
>>>>> index d06dd51..1e6dfd1 100644
>>>>> --- a/block.c
>>>>> +++ b/block.c
>>>>> @@ -5037,20 +5037,22 @@ void bdrv_invalidate_cache(BlockDriverState *bs, Error **errp)
>>>>>      if (local_err) {
>>>>>          error_propagate(errp, local_err);
>>>>>          return;
>>>>>      }
>>>>>
>>>>>      ret = refresh_total_sectors(bs, bs->total_sectors);
>>>>>      if (ret < 0) {
>>>>>          error_setg_errno(errp, -ret, "Could not refresh total sector count");
>>>>>          return;
>>>>>      }
>>>>> +
>>>>> +    bdrv_drain_all();
>>>>>  }
>>>>
>>>> Try moving the bdrv_drain_all() call to the top of the function (at
>>>> least it must be called before bs->drv->bdrv_invalidate_cache).
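(For readers following the thread: a rough sketch of the reordering suggested here. The prologue of bdrv_invalidate_cache() is not quoted above, so its exact shape is an approximation; the point is only the position of the bdrv_drain_all() call.)

/* sketch against block.c; bdrv_drain_all() comes from "block/block.h" */
void bdrv_invalidate_cache(BlockDriverState *bs, Error **errp)
{
    Error *local_err = NULL;
    int ret;

    /* Drain first, so no request is in flight while the driver
     * re-reads its on-disk metadata. */
    bdrv_drain_all();

    if (bs->drv && bs->drv->bdrv_invalidate_cache) {
        bs->drv->bdrv_invalidate_cache(bs, &local_err);
    }
    if (local_err) {
        error_propagate(errp, local_err);
        return;
    }

    ret = refresh_total_sectors(bs, bs->total_sectors);
    if (ret < 0) {
        error_setg_errno(errp, -ret, "Could not refresh total sector count");
        return;
    }
}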
>>>
>>>
>>> Ok, I did. Did not help.
>>>
>>>
>>>>
>>>>> +static QEMUBH *migration_complete_bh;
>>>>> +static void process_incoming_migration_complete(void *opaque);
>>>>> +
>>>>>  static void process_incoming_migration_co(void *opaque)
>>>>>  {
>>>>>      QEMUFile *f = opaque;
>>>>> -    Error *local_err = NULL;
>>>>>      int ret;
>>>>>
>>>>>      ret = qemu_loadvm_state(f);
>>>>>      qemu_fclose(f);
>>>>
>>>> Paolo suggested moving everything starting from here, but as far as I
>>>> can tell, leaving the next few lines here shouldn't hurt.
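(A rough sketch of the split being discussed, using the names declared in the hunk above; the body of process_incoming_migration_complete() is not part of the quoted patch, so this is illustrative only.)

/* sketch against migration.c; qemu_bh_new()/qemu_bh_schedule() come from
 * "qemu/main-loop.h" */
static QEMUBH *migration_complete_bh;

static void process_incoming_migration_complete(void *opaque)
{
    /* Runs from the main loop, outside coroutine context, so it
     * cannot yield. */
    qemu_bh_delete(migration_complete_bh);
    migration_complete_bh = NULL;

    /* ... remainder of the old coroutine body: bdrv_invalidate_cache_all(),
     * starting the VM, etc. ... */
}

static void process_incoming_migration_co(void *opaque)
{
    QEMUFile *f = opaque;
    int ret;

    ret = qemu_loadvm_state(f);
    qemu_fclose(f);
    /* error handling of ret elided */

    /* Defer completion to a bottom half. */
    migration_complete_bh = qemu_bh_new(process_incoming_migration_complete,
                                        NULL);
    qemu_bh_schedule(migration_complete_bh);
}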
>>>
>>>
>>> Ouch. I was looking at the wrong qcow2_fclose() all this time :)
>>> Anyway, what you suggested did not help -
>>> bdrv_co_flush() calls qemu_coroutine_yield() while this BH is being
>>> executed, and the situation is still the same.
>>
>> Hm, do you have a backtrace? The idea with the BH was that it would be
>> executed _outside_ coroutine context and therefore wouldn't be able to
>> yield. If it's still executed in coroutine context, it would be
>> interesting to see who that caller is.
> 
> Like this?
> process_incoming_migration_complete
> bdrv_invalidate_cache_all
> bdrv_drain_all
> aio_dispatch
> node->io_read (which is nbd_read)
> nbd_trip
> bdrv_co_flush
> [...]
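(The chain above is the crux: bdrv_drain_all() ends up in aio_dispatch(), which runs the NBD socket's read handler, and that handler enters a coroutine for the pending request - roughly the pattern below, simplified; the exact code in nbd.c may differ.)

/* simplified sketch of the fd read handler in nbd.c */
static void nbd_read(void *opaque)
{
    NBDClient *client = opaque;

    if (client->recv_coroutine) {
        /* Resume a request that was already in flight ... */
        qemu_coroutine_enter(client->recv_coroutine, NULL);
    } else {
        /* ... or start a fresh one; either way nbd_trip() runs in
         * coroutine context and bdrv_co_flush() is free to yield. */
        qemu_coroutine_enter(qemu_coroutine_create(nbd_trip), client);
    }
}

So the yield happens in an NBD request coroutine entered from inside the drain itself, not in process_incoming_migration_co - which would explain why moving the completion into a BH does not make it go away.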


Ping? I do not know how to interpret this backtrace - in fact, in gdb at
the moment of the crash I only see frames up to nbd_trip and
coroutine_trampoline (below). What is the context here, then?...


Program received signal SIGSEGV, Segmentation fault.
0x000000001050a8d4 in qcow2_cache_flush (bs=0x100363531a0, c=0x0) at
/home/alexey/p/qemu/block/qcow2-cache.c:174
(gdb) bt
#0  0x000000001050a8d4 in qcow2_cache_flush (bs=0x100363531a0, c=0x0) at
/home/alexey/p/qemu/block/qcow2-cache.c:174
#1  0x00000000104fbc4c in qcow2_co_flush_to_os (bs=0x100363531a0) at
/home/alexey/p/qemu/block/qcow2.c:2162
#2  0x00000000104c7234 in bdrv_co_flush (bs=0x100363531a0) at
/home/alexey/p/qemu/block.c:4978
#3  0x00000000104b7e68 in nbd_trip (opaque=0x1003653e530) at
/home/alexey/p/qemu/nbd.c:1260
#4  0x00000000104d7d84 in coroutine_trampoline (i0=0x100, i1=0x36549850) at
/home/alexey/p/qemu/coroutine-ucontext.c:118
#5  0x000000804db01a9c in .__makecontext () from /lib64/libc.so.6
#6  0x0000000000000000 in ?? ()
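(Frame #0 with c=0x0 fits the race named in the subject line: qcow2's invalidate_cache implementation closes and reopens the image, zeroing BDRVQcowState in between, so a request that reaches qcow2_co_flush_to_os() in that window passes a NULL cache pointer to qcow2_cache_flush(). A simplified, approximate sketch of the call site in block/qcow2.c:)

static coroutine_fn int qcow2_co_flush_to_os(BlockDriverState *bs)
{
    BDRVQcowState *s = bs->opaque;
    int ret;

    qemu_co_mutex_lock(&s->lock);
    /* s->l2_table_cache is NULL while the image is torn down for
     * invalidation - the c=0x0 seen in frame #0. */
    ret = qcow2_cache_flush(bs, s->l2_table_cache);
    if (ret < 0) {
        qemu_co_mutex_unlock(&s->lock);
        return ret;
    }

    ret = qcow2_cache_flush(bs, s->refcount_block_cache);
    qemu_co_mutex_unlock(&s->lock);

    return ret;
}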



-- 
Alexey


