
Re: [Qemu-devel] [PATCH 1/1] monitor: increase amount of data for monitor to read


From: Eric Blake
Subject: Re: [Qemu-devel] [PATCH 1/1] monitor: increase amount of data for monitor to read
Date: Tue, 2 May 2017 09:34:55 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.0

On 05/02/2017 08:47 AM, Denis V. Lunev wrote:
> Right now QMP and HMP monitors read 1 byte at a time from the socket, which
> is very inefficient. With 100+ VMs on the host this easily reasults in

s/reasults/results/

> a lot of unnecessary system calls and CPU usage in the system.
> 
> This patch changes the amount of data to read to 4096 bytes, which matches
> the buffer size at the channel level. Fortunately, the monitor protocol is
> synchronous right now, so we should not face side effects in practice.

Do you have any easy benchmarks or measurements to prove what sort of
efficiencies we get?  (I believe they exist, but quantifying them never
hurts)

> 
> Signed-off-by: Denis V. Lunev <address@hidden>
> CC: Markus Armbruster <address@hidden>
> CC: "Dr. David Alan Gilbert" <address@hidden>
> CC: Eric Blake <address@hidden>
> ---
>  monitor.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/monitor.c b/monitor.c
> index be282ec..00df5d0 100644
> --- a/monitor.c
> +++ b/monitor.c
> @@ -3698,7 +3698,7 @@ static int monitor_can_read(void *opaque)
>  {
>      Monitor *mon = opaque;
>  
> -    return (mon->suspend_cnt == 0) ? 1 : 0;
> +    return (mon->suspend_cnt == 0) ? 4096 : 0;

Is a hard-coded number correct, or should we be asking the channel for
an actual number?
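
For illustration, one way to avoid the magic number would be to give it a
name and keep it tied to the chardev read buffer (if memory serves, the
chardev code sizes its read buffer with a private READ_BUF_LEN of 4096).
A rough, untested sketch only; MON_READ_LEN is a made-up name:

#define MON_READ_LEN 4096  /* should match the chardev layer's read buffer */

static int monitor_can_read(void *opaque)
{
    Monitor *mon = opaque;

    /* Advertise a full buffer's worth of data instead of a single byte. */
    return (mon->suspend_cnt == 0) ? MON_READ_LEN : 0;
}

Better still would be for the chardev layer to export that size so the
monitor could ask the channel directly, but that is a bigger change.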

>  }
>  
>  static void handle_qmp_command(JSONMessageParser *parser, GQueue *tokens)
> 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


