Re: [Qemu-devel] [PATCH v2 2/2] block: bump coroutine pool size for drives
From: Markus Armbruster
Subject: Re: [Qemu-devel] [PATCH v2 2/2] block: bump coroutine pool size for drives
Date: Fri, 04 Jul 2014 12:03:27 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.3 (gnu/linux)
Stefan Hajnoczi <address@hidden> writes:
> When a BlockDriverState is associated with a storage controller
> DeviceState we expect guest I/O. Use this opportunity to bump the
> coroutine pool size by 64.
>
> This patch ensures that the coroutine pool size scales with the number
> of drives attached to the guest. It should increase coroutine pool
> usage (which makes qemu_coroutine_create() fast) without hogging too
> much memory when fewer drives are attached.
>
> Signed-off-by: Stefan Hajnoczi <address@hidden>
> ---
> block.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/block.c b/block.c
> index f80e2b2..c8379ca 100644
> --- a/block.c
> +++ b/block.c
> @@ -2093,6 +2093,9 @@ int bdrv_attach_dev(BlockDriverState *bs, void *dev)
> }
> bs->dev = dev;
> bdrv_iostatus_reset(bs);
> +
> + /* We're expecting I/O from the device so bump up coroutine pool size */
> + qemu_coroutine_adjust_pool_size(64);
> return 0;
> }
>
> @@ -2112,6 +2115,7 @@ void bdrv_detach_dev(BlockDriverState *bs, void *dev)
> bs->dev_ops = NULL;
> bs->dev_opaque = NULL;
> bs->guest_block_size = 512;
> + qemu_coroutine_adjust_pool_size(-64);
> }
>
> /* TODO change to return DeviceState * when all users are qdevified */
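[For readers following along: the quoted patch calls qemu_coroutine_adjust_pool_size(), which patch 1/2 of this series introduces. A minimal sketch of what such an accounting helper might look like is below; the variable names and the lack of locking are assumptions for illustration, not QEMU's actual implementation, which must guard the counter for thread safety and may also free excess pooled coroutines when shrinking.]

```c
#include <assert.h>

/* Default ceiling on the number of cached (pooled) coroutines. */
enum { POOL_DEFAULT_SIZE = 64 };

static int pool_max_size = POOL_DEFAULT_SIZE;

/* Grow (n > 0) or shrink (n < 0) the coroutine pool ceiling.
 * Called with +64 on drive attach and -64 on detach in the patch above. */
static void qemu_coroutine_adjust_pool_size(int n)
{
    pool_max_size += n;
    assert(pool_max_size >= 0);
}
```

Each attach/detach pair is balanced, so the ceiling returns to its default once all drives are gone.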
This enlarges the pool regardless of how the device model uses the block
layer. Isn't this a bit crude?
Have you considered adapting the number of coroutines to actual demand?
Within reasonable limits, of course.
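[One way to read the suggestion: instead of a fixed +64 per attached drive, let the pool ceiling track the observed high-water mark of live coroutines, clamped to a sane range. The sketch below is a hypothetical illustration of that idea; none of these names are QEMU API, and real code would need atomic or lock-protected counters.]

```c
#include <assert.h>

enum { POOL_MIN = 64, POOL_MAX = 4096 };   /* "reasonable limits" */

static unsigned coroutines_in_flight;      /* currently live coroutines */
static unsigned high_water;                /* peak concurrency observed */
static unsigned pool_max_size = POOL_MIN;  /* adaptive pool ceiling */

static unsigned clamp(unsigned v, unsigned lo, unsigned hi)
{
    return v < lo ? lo : v > hi ? hi : v;
}

/* Call when a coroutine is created: raise the ceiling toward actual demand. */
static void coroutine_created(void)
{
    if (++coroutines_in_flight > high_water) {
        high_water = coroutines_in_flight;
        pool_max_size = clamp(high_water, POOL_MIN, POOL_MAX);
    }
}

/* Call when a coroutine terminates. */
static void coroutine_terminated(void)
{
    assert(coroutines_in_flight > 0);
    coroutines_in_flight--;
}
```

This sizes the pool by what the guest actually does rather than by how many drives happen to be attached, at the cost of a per-create counter update.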