Re: [Qemu-devel] [RFC] block: Removed coroutine ownership assumption

From: Peter Crosthwaite
Subject: Re: [Qemu-devel] [RFC] block: Removed coroutine ownership assumption
Date: Mon, 2 Jul 2012 20:57:53 +1000

On Mon, Jul 2, 2012 at 8:52 PM, Kevin Wolf <address@hidden> wrote:
> Am 02.07.2012 12:18, schrieb Peter Crosthwaite:
>> On Mon, Jul 2, 2012 at 8:01 PM, Kevin Wolf <address@hidden> wrote:
>>> Am 02.07.2012 11:42, schrieb Peter Crosthwaite:
>>>> On Mon, Jul 2, 2012 at 7:04 PM, Kevin Wolf <address@hidden> wrote:
>>>>> Am 02.07.2012 10:57, schrieb Peter Crosthwaite:
>>>>>> No conditional on the qemu_coroutine_create. So it will always create
>>>>>> a new coroutine for its work which will solve my problem. All I need
>>>>>> to do is pump events once at the end of machine model creation. And my
>>>>>> coroutines will never yield or get queued by block/AIO. Sounds like a
>>>>>> solution?
>>>>> If you don't need the read data in your initialisation code,
>>>> definitely not :) Just as long as the read data is there by the time
>>>> the machine goes live. What's the current policy with bdrv_read()ing
>>>> from init functions anyway? Several devices in qemu have init
>>>> functions that read the entire storage into a buffer (then the guest
>>>> just talks to the buffer rather than the backing store).
>>> Reading from block devices during device initialisation breaks
>>> migration, so I'd like to see it go away wherever possible. Reading in
>>> the whole image file doesn't sound like something for which a good
>>> excuse exists,
>> Makes sense for small devices on embedded platforms. You end up with a
>> very simple and clean device model. The approach is totally broken for
>> HDDs but it does make some sense for the little embedded flashes where
>> you can get away with caching the entire device storage in RAM for the
>> lifetime of the system.
> It kind of works for read-only devices, it's just ugly there. With
> writeable devices it makes the VM unmigratable.

Isn't it just a simple case of syncing the buffer with the backing
store on pre_save?


>>> you can do that as well during the first access.
>> Kind of painful though to change this for a lot of existing devices.
>> It's a reasonable policy for new devices, but can we just fix the
>> init->bdrv_read() calls in place for the moment?
> Sure, but you asked for the policy and the policy is "only with good
> reasons". ;-)
>>>> Pflash (pflash_cfi_01.c) is the device that is causing me interference
>>>> here and it works exactly like this. If we make the bdrv_read() aio
>>>> though, how do you ensure that it has completed before the guest talks
>>>> to the device? Will this just happen at the end of machine_init
>>>> anyway? Can we put a one-liner in the machine init framework that
>>>> pumps all AIO events, then just mass convert all these bdrv_reads (in
>>>> init functions) to bdrv_aio_read with a nop completion callback?
>>> The initialisation function of the device can wait at its end for all
>>> AIOs to return.
>> You lose the performance increase discussed below however, as instead
>> of the init function returning to continue on with the machine init,
>> you block on disk IO.
> Do you think it really matters? I guess it might if you have two such
> devices that take each a second or so to load, but otherwise there isn't
> much to parallelise.
> Kevin
