From: Markus Armbruster
Subject: Re: [Qemu-devel] [PATCH v2 2/2] block: bump coroutine pool size for drives
Date: Mon, 07 Jul 2014 14:32:02 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.3 (gnu/linux)
Stefan Hajnoczi <address@hidden> writes:
> On Fri, Jul 04, 2014 at 12:03:27PM +0200, Markus Armbruster wrote:
>> Stefan Hajnoczi <address@hidden> writes:
>> > @@ -2112,6 +2115,7 @@ void bdrv_detach_dev(BlockDriverState *bs, void *dev)
>> > bs->dev_ops = NULL;
>> > bs->dev_opaque = NULL;
>> > bs->guest_block_size = 512;
>> > + qemu_coroutine_adjust_pool_size(-64);
>> > }
>> >
>> > /* TODO change to return DeviceState * when all users are qdevified */
>>
>> This enlarges the pool regardless of how the device model uses the block
>> layer. Isn't this a bit crude?
>>
>> Have you considered adapting the number of coroutines to actual demand?
>> Within reasonable limits, of course.
>
> I picked the simplest algorithm because I couldn't think of one which is
> clearly better. We cannot predict future coroutine usage so any
> algorithm will have pathological cases.

Dynamically adapting to actual usage arguably involves less predicting
than 64 * #block backends.  Grow the pool some when we're running out of
coroutines, shrink it some when it's been underutilized for some time.

> In this case we might as well stick to the simplest implementation.

Keeping it simple is always a weighty argument.