Re: bad qemu savevm to /dev/null performance (600 MiB/s max) (Was: Re: starting to look at qemu savevm performance, a first regression detected)


From: Claudio Fontana
Subject: Re: bad qemu savevm to /dev/null performance (600 MiB/s max) (Was: Re: starting to look at qemu savevm performance, a first regression detected)
Date: Wed, 9 Mar 2022 14:16:22 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.12.0

On 3/9/22 12:43 PM, Dr. David Alan Gilbert wrote:
> * Claudio Fontana (cfontana@suse.de) wrote:
>> On 3/7/22 1:28 PM, Dr. David Alan Gilbert wrote:
>>> * Claudio Fontana (cfontana@suse.de) wrote:
>>>> On 3/7/22 1:20 PM, Daniel P. Berrangé wrote:
>>>>> On Mon, Mar 07, 2022 at 01:09:55PM +0100, Claudio Fontana wrote:
>>>>>> On 3/7/22 1:00 PM, Daniel P. Berrangé wrote:
>>>>>>> On Mon, Mar 07, 2022 at 12:19:22PM +0100, Claudio Fontana wrote:
>>>>>>>> On 3/7/22 10:51 AM, Daniel P. Berrangé wrote:
>>>>>>>>> On Mon, Mar 07, 2022 at 10:44:56AM +0100, Claudio Fontana wrote:
>>>>>>>>>> Hello Daniel,
>>>>>>>>>>
>>>>>>>>>> On 3/7/22 10:27 AM, Daniel P. Berrangé wrote:
>>>>>>>>>>> On Sat, Mar 05, 2022 at 02:19:39PM +0100, Claudio Fontana wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> Hello all,
>>>>>>>>>>>>
>>>>>>>>>>>> I have been looking at some reports of bad qemu savevm
>>>>>>>>>>>> performance in large VMs (around 20+ GiB of RAM), when used in
>>>>>>>>>>>> libvirt commands like:
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> virsh save domain /dev/null
>>>>>>>>>>>>
>>>>>>>>>>>> I have written a simple test to run in a Linux
>>>>>>>>>>>> centos7-minimal-2009 guest, which allocates and touches 20 GiB of
>>>>>>>>>>>> memory.
>>>>>>>>>>>>
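(For reference, a minimal sketch of what such a memory-touch test could look
like; this is a hypothetical reconstruction, not necessarily the actual test
program used:)

    /* Allocate 20 GiB and touch one byte per 4 KiB page, so every page is
     * resident and non-zero; all-zero pages would be detected as 'duplicate'
     * by the migration code and transferred cheaply. */
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        size_t size = 20ULL << 30;              /* 20 GiB */
        unsigned char *buf = malloc(size);

        if (!buf) {
            return 1;
        }
        for (size_t off = 0; off < size; off += 4096) {
            buf[off] = 0xaa;                    /* dirty the page */
        }
        pause();                                /* keep the memory resident */
        return 0;
    }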
>>>>>>>>>>>> With any qemu version since around 2020, I am not seeing more
>>>>>>>>>>>> than 580 MiB/s even in the most ideal of situations.
>>>>>>>>>>>>
>>>>>>>>>>>> This drops to around 122 MiB/s after commit
>>>>>>>>>>>> cbde7be900d2a2279cbc4becb91d1ddd6a014def.
>>>>>>>>>>>>
>>>>>>>>>>>> Here is the bisection for this particular drop in throughput:
>>>>>>>>>>>>
>>>>>>>>>>>> commit cbde7be900d2a2279cbc4becb91d1ddd6a014def (HEAD, refs/bisect/bad)
>>>>>>>>>>>> Author: Daniel P. Berrangé <berrange@redhat.com>
>>>>>>>>>>>> Date:   Fri Feb 19 18:40:12 2021 +0000
>>>>>>>>>>>>
>>>>>>>>>>>>     migrate: remove QMP/HMP commands for speed, downtime and cache size
>>>>>>>>>>>>
>>>>>>>>>>>>     The generic 'migrate_set_parameters' command handle all types of param.
>>>>>>>>>>>>
>>>>>>>>>>>>     Only the QMP commands were documented in the deprecations page, but the
>>>>>>>>>>>>     rationale for deprecating applies equally to HMP, and the replacements
>>>>>>>>>>>>     exist. Furthermore the HMP commands are just shims to the QMP commands,
>>>>>>>>>>>>     so removing the latter breaks the former unless they get re-implemented.
>>>>>>>>>>>>
>>>>>>>>>>>>     Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>>>>>>>>>>>>     Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
>>>>>>>>>>>
>>>>>>>>>>> That doesn't make a whole lot of sense as a bisect result.
>>>>>>>>>>> How reliable is that bisect end point ? Have you bisected
>>>>>>>>>>> to that point more than once ?
>>>>>>>>>>
>>>>>>>>>> I ran through the bisect itself only once, so I'll double-check
>>>>>>>>>> that. The results seem reproducible almost to the second, though:
>>>>>>>>>> a savevm that took 35 seconds before the commit takes 2m48s after.
>>>>>>>>>>
>>>>>>>>>> For this test I am using libvirt v6.0.0.
>>>>>>>
>>>>>>> I've just noticed this.  That version of libvirt is 2 years old and
>>>>>>> doesn't have full support for migrate_set_parameters.
>>>>>>>
>>>>>>>
>>>>>>>> 2022-03-07 10:47:20.145+0000: 134386: info : qemuMonitorIOWrite:452 : 
>>>>>>>> QEMU_MONITOR_IO_WRITE: mon=0x7fa4380028a0 
>>>>>>>> buf={"execute":"migrate_set_speed","arguments":{"value":9223372036853727232},"id":"libvirt-19"}^M
>>>>>>>>  len=93 ret=93 errno=0
>>>>>>>> 2022-03-07 10:47:20.146+0000: 134386: info : 
>>>>>>>> qemuMonitorJSONIOProcessLine:240 : QEMU_MONITOR_RECV_REPLY: 
>>>>>>>> mon=0x7fa4380028a0 reply={"id": "libvirt-19", "error": {"class": 
>>>>>>>> "CommandNotFound", "desc": "The command migrate_set_speed has not been 
>>>>>>>> found"}}
>>>>>>>> 2022-03-07 10:47:20.147+0000: 134391: error : 
>>>>>>>> qemuMonitorJSONCheckError:412 : internal error: unable to execute QEMU 
>>>>>>>> command 'migrate_set_speed': The command migrate_set_speed has not 
>>>>>>>> been found
>>>>>>>
>>>>>>> We see migrate_set_speed failing, and libvirt obviously ignores that
>>>>>>> failure.
>>>>>>>
>>>>>>> In current libvirt, migrate_set_speed is not used, since libvirt
>>>>>>> properly handles migrate_set_parameters AFAICT.
>>>>>>>
>>>>>>> I think you just need to upgrade libvirt if you want to use this
>>>>>>> newer QEMU version.
>>>>>>>
>>>>>>> Regards,
>>>>>>> Daniel
>>>>>>>
>>>>>>
>>>>>> Got it, this explains it, sorry for the noise on this.
>>>>>>
>>>>>> I'll continue to investigate the general issue of low throughput with
>>>>>> virsh save / qemu savevm.
>>>>>
>>>>> BTW, consider measuring with the --bypass-cache flag to virsh save.
>>>>> This causes libvirt to use an I/O helper that opens the image with
>>>>> O_DIRECT when saving it. This should give more predictable results by
>>>>> avoiding the influence of the host I/O cache, which can be in a
>>>>> different state of usage each time you measure. It was also intended
>>>>> that, by avoiding the cache, saving the memory image of a large VM will
>>>>> not push other useful data out of the host I/O cache, which could
>>>>> negatively impact other running VMs.
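(For example, with an output path that is just illustrative:

    virsh save centos7 /var/tmp/centos7.sav --bypass-cache
)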
>>>>>
>>>>> Also, it is possible to configure compression on the libvirt side,
>>>>> which may be useful if you have spare CPU cycles but your storage is
>>>>> slow. See 'save_image_format' in /etc/libvirt/qemu.conf.
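(For instance, in /etc/libvirt/qemu.conf; the chosen format here is just an
example, check which compressors your libvirt build actually supports:

    save_image_format = "lzop"
)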
>>>>>
>>>>> With regards,
>>>>> Daniel
>>>>>
>>>>
>>>> Hi Daniel, thanks for the good info.
>>>>
>>>> Regarding slow storage: for these tests I am saving to /dev/null to
>>>> avoid having to take storage into account (and am still getting low
>>>> bandwidth, unfortunately), so I guess compression is out of the question.
>>>
>>> What type of speeds do you get if you try a migrate to a netcat socket?
>>
>> Much faster apparently: about 30 seconds for savevm vs 7 seconds for a
>> migration to a netcat socket redirected to /dev/null.
>>
>> nc -l -U /tmp/savevm.socket > /dev/null
>>
>> virsh suspend centos7
>> Domain centos7 suspended
>>
>> virsh qemu-monitor-command --cmd '{ "execute": "migrate", "arguments": { 
>> "uri": "unix:///tmp/savevm.socket" } }' centos7
>>
>> virt97:/mnt # virsh qemu-monitor-command --cmd '{ "execute": "query-migrate" 
>> }' centos7
>> {"return":{"blocked":false,"status":"completed","setup-time":118,"downtime":257,"total-time":7524,"ram":{"total":32213049344,"postcopy-requests":0,"dirty-sync-count":3,"multifd-bytes":0,"pages-per-second":1057530,"page-size":4096,"remaining":0,"mbps":24215.572437483122,"transferred":22417172290,"duplicate":2407520,"dirty-pages-rate":0,"skipped":0,"normal-bytes":22351847424,"normal":5456994}},"id":"libvirt-438"}
>>
>> virt97:/mnt # virsh qemu-monitor-command --cmd '{ "execute": 
>> "query-migrate-parameters" }' centos7
>> {"return":{"cpu-throttle-tailslow":false,"xbzrle-cache-size":67108864,"cpu-throttle-initial":20,"announce-max":550,"decompress-threads":2,"compress-threads":8,"compress-level":0,"multifd-channels":8,"multifd-zstd-level":1,"announce-initial":50,"block-incremental":false,"compress-wait-thread":true,"downtime-limit":300,"tls-authz":"","multifd-compression":"none","announce-rounds":5,"announce-step":100,"tls-creds":"","multifd-zlib-level":1,"max-cpu-throttle":99,"max-postcopy-bandwidth":0,"tls-hostname":"","throttle-trigger-threshold":50,"max-bandwidth":9223372036853727232,"x-checkpoint-delay":20000,"cpu-throttle-increment":10},"id":"libvirt-439"}
>>
>>
>> I also did a run with multifd-channels=1 instead of 8, if it matters:
> 
> I suspect you haven't actually got multifd enabled (check
> query-migrate-capabilities?).
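(For example, mirroring the qemu-monitor-command invocations used below:

    virsh qemu-monitor-command --cmd '{ "execute": "query-migrate-capabilities" }' centos7

and look for the state of the "multifd" capability in the returned list.)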
>>
>> virt97:/mnt # virsh qemu-monitor-command --cmd '{ "execute": "query-migrate" 
>> }' centos7
>> {"return":{"blocked":false,"status":"completed","setup-time":119,"downtime":260,"total-time":8601,"ram":{"total":32213049344,"postcopy-requests":0,"dirty-sync-count":3,"multifd-bytes":0,"pages-per-second":908820,"page-size":4096,"remaining":0,"mbps":21141.861157274227,"transferred":22415264188,"duplicate":2407986,"dirty-pages-rate":0,"skipped":0,"normal-bytes":22349938688,"normal":5456528}},"id":"libvirt-453"}
>>
>> virt97:/mnt # virsh qemu-monitor-command --cmd '{ "execute": 
>> "query-migrate-parameters" }' centos7
>> {"return":{"cpu-throttle-tailslow":false,"xbzrle-cache-size":67108864,"cpu-throttle-initial":20,"announce-max":550,"decompress-threads":2,"compress-threads":8,"compress-level":0,"multifd-channels":1,"multifd-zstd-level":1,"announce-initial":50,"block-incremental":false,"compress-wait-thread":true,"downtime-limit":300,"tls-authz":"","multifd-compression":"none","announce-rounds":5,"announce-step":100,"tls-creds":"","multifd-zlib-level":1,"max-cpu-throttle":99,"max-postcopy-bandwidth":0,"tls-hostname":"","throttle-trigger-threshold":50,"max-bandwidth":9223372036853727232,"x-checkpoint-delay":20000,"cpu-throttle-increment":10},"id":"libvirt-454"}
>>
>>
>> Still, we are in the 20 Gbps range, or around 2560 MiB/s: way faster than
>> savevm, which does around 600 MiB/s when the wind is in its favor.
>
> Yeh, that's what I'd hope for off a decent CPU; hmm, there isn't that much
> savevm-specific code, is there?


Hmm, not sure. I have a hunch: is the difference just more threads being used
to transfer the data?

Is the migration path creating threads while savevm doesn't? Hmm...
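One quick way to check that hunch with standard tools would be to watch
per-thread CPU usage during both operations and compare how many threads are
busy, e.g. (the binary name depends on the machine type/arch):

    top -H -p $(pidof qemu-system-x86_64)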

Thanks,

Claudio