qemu-devel


From: Fam Zheng
Subject: Re: [Qemu-devel] [PATCH] migration/block: Avoid involve into blk_drain too frequently
Date: Wed, 15 Mar 2017 10:57:16 +0800
User-agent: Mutt/1.7.1 (2016-10-04)

On Wed, 03/15 10:28, 858585 jemmy wrote:
> On Tue, Mar 14, 2017 at 11:12 PM, Eric Blake <address@hidden> wrote:
> > On 03/14/2017 02:57 AM, address@hidden wrote:
> >> From: Lidong Chen <address@hidden>
> >>
> >> Increase bmds->cur_dirty after submitting I/O, to reduce how often
> >> we fall into blk_drain and noticeably improve performance during
> >> block migration.
> >
> > Long line; please wrap your commit messages, preferably around 70 bytes
> > since 'git log' displays them indented, and it is still nice to read
> > them in an 80-column window.
> >
> > Do you have benchmark numbers to prove the impact of this patch, or even
> > a formula for reproducing the benchmark testing?
> >
> 
> The test results below are based on the current git master.
> 
> The disk XML of the guest OS:
>     <disk type='file' device='disk'>
>       <driver name='qemu' type='qcow2' cache='none'/>
>       <source 
> file='/instanceimage/ab3ba978-c7a3-463d-a1d0-48649fb7df00/ab3ba978-c7a3-463d-a1d0-48649fb7df00_vda.qcow2'/>
>       <target dev='vda' bus='virtio'/>
>       <alias name='virtio-disk0'/>
>       <address type='pci' domain='0x0000' bus='0x00' slot='0x04'
> function='0x0'/>
>     </disk>
>     <disk type='block' device='disk'>
>       <driver name='qemu' type='raw' cache='none' io='native'/>
>       <source dev='/dev/domu/ab3ba978-c7a3-463d-a1d0-48649fb7df00_vdb'/>
>       <target dev='vdb' bus='virtio'/>
>       <alias name='virtio-disk1'/>
>       <address type='pci' domain='0x0000' bus='0x00' slot='0x05'
> function='0x0'/>
>     </disk>
> 
> I ran fio in the guest OS with the following configuration:
> [randwrite]
> ioengine=libaio
> iodepth=128
> bs=512
> filename=/dev/vdb
> rw=randwrite
> direct=1
> 
> When the VM is not migrating, the IOPS is about 10.7K.
> 
> Then I used these commands to start migrating the virtual machine:
> 
> virsh migrate-setspeed ab3ba978-c7a3-463d-a1d0-48649fb7df00 1000
> virsh migrate --live ab3ba978-c7a3-463d-a1d0-48649fb7df00
> --copy-storage-inc qemu+ssh://10.59.163.38/system
> 
> Before applying this patch, during the block dirty save phase, the
> IOPS in the guest OS is only 4.0K and the migration speed is about
> 505856 rsec/s. After applying this patch, during the block dirty save
> phase, the IOPS in the guest OS is 9.5K and the migration speed is
> about 855756 rsec/s.

Thanks, please include these numbers in the commit message too.

Fam


