From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] VM I/O performance drops dramatically during storage migration with drive-mirror
Date: Fri, 1 Jun 2018 13:10:46 +0100
User-agent: Mutt/1.9.5 (2018-04-13)
On Mon, May 28, 2018 at 06:17:10PM +0800, Chunguang Li wrote:
> Hi, everyone.
>
> Recently I have been doing some tests on VM storage+memory migration with
> KVM/QEMU/libvirt. I use the following migrate command through virsh: "virsh
> migrate --live --copy-storage-all --verbose vm1
> qemu+ssh://192.168.1.91/system tcp://192.168.1.91". I have checked the
> libvirt debug output and made sure that the drive-mirror + NBD migration
> method is used.
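One way to double-check that a mirror job is really active (a sketch, assuming the domain is named vm1 and that QMP passthrough via virsh is available) is to list the block jobs while the migration is running:

    # an active drive-mirror appears as a block job of type "mirror"
    virsh qemu-monitor-command vm1 --pretty '{"execute": "query-block-jobs"}'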
>
> Inside the VM, I use an I/O benchmark (Iometer) to generate an OLTP workload.
> I record the I/O performance (IOPS) before/during/after migration. When the
> migration began, the IOPS dropped by 30%-40%. This is reasonable, because
> the migration I/O competes with the workload I/O. However, during the last
> period of the migration (about 66s in my case), the IOPS dropped
> dramatically, from about 170 to less than 10. A figure from this
> experiment is attached to this email.
>
> I want to figure out what causes this period of very low IOPS. First, I
> added some printf()s to the QEMU code and learned that this period occurs just
> before the memory migration phase. (BTW, the memory migration is very fast,
> only about 5s.) So I think this period should be the last phase of
> the "drive-mirror" process in QEMU. I then tried to read the code of
> "drive-mirror" in QEMU, but failed to understand it very well.
>
> Does anybody know what may cause this period of very low IOPS? Thank you
> very much.
IOPS dropped from 170 to less than 10. That could be because QEMU or
the storage device is slow at completing requests due to the other
activity (drive-mirror). But it could also be because the guest simply
isn't submitting many I/O requests!
So I think the first step is to determine how much I/O the guest is
submitting. There are several ways of doing this. You can enable the
virtio_blk_handle_write and virtio_blk_handle_read trace events if you
are using virtio-blk (see docs/devel/tracing.txt). Or you could use
iostat(1) inside the guest to observe the number of completed requests +
queue size.
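For example (a minimal sketch, assuming a virtio-blk disk, a domain named vm1, and the default "log" trace backend so the trace output lands in libvirt's per-domain QEMU log):

    # on the host: toggle the trace events for the running guest via the HMP monitor
    virsh qemu-monitor-command vm1 --hmp 'trace-event virtio_blk_handle_read on'
    virsh qemu-monitor-command vm1 --hmp 'trace-event virtio_blk_handle_write on'

    # inside the guest: completed requests per device (r/s, w/s) and average queue size (avgqu-sz)
    iostat -dx 1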
If the guest is not submitting the expected number of requests then
you'll need to investigate why Iometer is starved.
> Some details of this experiment:
> The VM disk image file is 30GB (format=raw,cache=none,aio=native), and
> Iometer operates on a 10GB file inside the VM. The OLTP workload consists of
> 33% writes and 67% reads (8KB request size, all random). The VM memory size
> is 4GB, most of which should be zero pages, so the memory migration is very
> fast.
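A rough approximation of this workload with fio rather than Iometer (an illustrative sketch under the stated parameters; the job name and runtime are arbitrary):

    # random 67/33 read/write mix, 8KB requests, 10GB working set, O_DIRECT
    fio --name=oltp --rw=randrw --rwmixread=67 --bs=8k --size=10g \
        --ioengine=libaio --direct=1 --time_based --runtime=300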
>
> --
> Chunguang Li, Ph.D. Candidate
> Wuhan National Laboratory for Optoelectronics (WNLO)
> Huazhong University of Science & Technology (HUST)
> Wuhan, Hubei Prov., China
>