Re: [Qemu-devel] [PATCH v2 2/2] block: disable I/O throttling on sync api


From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH v2 2/2] block: disable I/O throttling on sync api
Date: Fri, 30 Mar 2012 08:13:27 +0100
User-agent: Mutt/1.5.21 (2010-09-15)

On Tue, Mar 27, 2012 at 07:43:17PM +0800, address@hidden wrote:
> From: Zhi Yong Wu <address@hidden>
> 
> Signed-off-by: Stefan Hajnoczi <address@hidden>
> Signed-off-by: Zhi Yong Wu <address@hidden>
> ---
>  block.c |   10 ++++++++++
>  1 files changed, 10 insertions(+), 0 deletions(-)

I tested this successfully with if=virtio and if=ide.

> diff --git a/block.c b/block.c
> index 1fbf4dd..f0b4f38 100644
> --- a/block.c
> +++ b/block.c
> @@ -1477,6 +1477,12 @@ static int bdrv_rw_co(BlockDriverState *bs, int64_t sector_num, uint8_t *buf,
> 
>      qemu_iovec_init_external(&qiov, &iov, 1);
> 

Please add a comment explaining that synchronous I/O cannot be throttled
because timers don't work in that context.

> +    if (bs->io_limits_enabled) {
> +        fprintf(stderr, "Disabling I/O throttling on '%s' due "
> +                        "to synchronous I/O.\n", bdrv_get_device_name(bs));
> +        bdrv_io_limits_disable(bs);
> +    }
> +
>      if (qemu_in_coroutine()) {
>          /* Fast-path if already in coroutine context */
>          bdrv_rw_co_entry(&rwco);
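
For example, something along these lines (just a sketch, feel free to
reword):

    /* Synchronous I/O cannot be throttled because timer callbacks are
     * never invoked in this context, so a throttled request would never
     * be resumed.  Disable throttling on this device instead.
     */
    if (bs->io_limits_enabled) {
        fprintf(stderr, "Disabling I/O throttling on '%s' due "
                        "to synchronous I/O.\n", bdrv_get_device_name(bs));
        bdrv_io_limits_disable(bs);
    }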
> @@ -1983,10 +1989,14 @@ static int guess_disk_lchs(BlockDriverState *bs,
>      struct partition *p;
>      uint32_t nr_sects;
>      uint64_t nb_sectors;
> +    bool enabled;
> 
>      bdrv_get_geometry(bs, &nb_sectors);
> 
> +    enabled = bs->io_limits_enabled;
> +    bs->io_limits_enabled = false;
>      ret = bdrv_read(bs, 0, buf, 1);
> +    bs->io_limits_enabled = enabled;

Please add a comment explaining that this function is called during
startup, even for storage interfaces that use asynchronous I/O
functions.  Therefore we explicitly disable throttling *temporarily*,
not permanently, here.
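
For example (again just a sketch of the comment I have in mind):

    /* This function is called during startup even for devices that
     * otherwise only use the asynchronous I/O functions, so bypass
     * throttling only for this single synchronous read and restore the
     * previous setting right afterwards.
     */
    enabled = bs->io_limits_enabled;
    bs->io_limits_enabled = false;
    ret = bdrv_read(bs, 0, buf, 1);
    bs->io_limits_enabled = enabled;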

Stefan



