
Re: [raw] Guest stuck during live live-migration


From: Kevin Wolf
Subject: Re: [raw] Guest stuck during live live-migration
Date: Mon, 23 Nov 2020 13:25:26 +0100

[ Cc: qemu-block ]

Am 23.11.2020 um 10:36 hat Quentin Grolleau geschrieben:
> Hello,
> 
> In our company, we are hosting a large number of VMs behind OpenStack
> (so libvirt/qemu).
> The large majority of our VMs run with local data only, stored on NVMe,
> and most of them use raw disks.
> 
> With QEMU 4.0 (it can even be seen with older versions) we see strange
> live-migration behaviour:

First of all, 4.0 is relatively old. Generally it is worth retrying with
the most recent code (git master or 5.2.0-rc2) before having a closer
look at problems, because it is frustrating to spend considerable time
debugging an issue and then find out it has already been fixed a year
ago.

>     - some VMs live-migrate at very high speed without issue (> 6 Gbps)
>     - some VMs are running correctly, but migrate at a strangely low
> speed (3 Gbps)
>     - some VMs migrate at a very low speed (1 Gbps, sometimes less) and
> during the migration the guest is completely I/O stuck
> 
> When this issue happens the VM is completely blocked; iostat inside the
> VM shows a latency of 30 seconds

Can you get the stack backtraces of all QEMU threads while the VM is
blocked (e.g. with gdb or pstack)?

> First we thought it was related to a hardware issue, so we checked and
> compared different hardware, but no issue was found there
> 
> So one of my colleagues had the idea to limit, with "tc", the bandwidth
> on the interface the migration was done over, and it worked: the VM
> didn't lose any ping nor get I/O stuck
> Important point: once the VM has been migrated (with the limitation) one
> time, if we migrate it again right after, the migration is done at full
> speed (8-9 Gb/s) without freezing the VM

Since you say you're using local storage, I assume that you're doing
both a VM live migration and storage migration at the same time. These
are separate connections, storage is migrated using a NBD connection.

Did you limit the bandwidth for both connections, or if it was just one
of them, which one was it?

> It only happens on existing VMs; we tried to replicate it with a fresh
> instance with exactly the same specs and nothing happened
> 
> We tried to replicate the workload inside the VM but there was no way to
> reproduce the case. So it was not related to the workload nor to the
> server that hosts the VM
> 
> So we thought about the disk of the instance : the raw file.
> 
> We also tried to strace -c the process during the live-migration and it
> was doing a lot of "lseek" calls
> 
> and we found this :
> https://lists.gnu.org/archive/html/qemu-devel/2017-02/msg00462.html

This case is different in that it used qcow2 (which should behave much
better today).

It also used ZFS, which you didn't mention. Is the problematic image
stored on ZFS? If not, which filesystem is it?

> So I rebuilt QEMU with this patch and the live-migration went well, at
> high speed and with no VM freeze
> ( https://github.com/qemu/qemu/blob/master/block/file-posix.c#L2601 )
> 
> Do you have a way to avoid the "lseek" mechanism, as it consumes a lot
> of resources to find the holes in the disk and doesn't leave any for
> the VM?

If you can provide the stack trace during the hang, we might be able to
tell why we're even trying to find holes.

Please also provide your QEMU command line.

At the moment, my assumption is that this is during a mirror block job
which is migrating the disk to your destination server. Not looking for
holes would mean that a sparse source file would become fully allocated
on the destination, which is usually not wanted (also, we would
potentially transfer a lot more data over the network).
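
For reference, the hole detection that shows up as lseek in strace is
essentially a walk over the file with SEEK_DATA/SEEK_HOLE pairs. A
minimal standalone sketch of that mechanism (not QEMU's actual code,
just an illustration of the syscall pattern) looks like this:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>

    /* Walk a file and print its data extents the same basic way the
     * block layer discovers them: one lseek(SEEK_DATA) plus one
     * lseek(SEEK_HOLE) per extent.  On a heavily fragmented file this
     * becomes a long series of syscalls, which is what strace shows. */
    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }

        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        off_t end = lseek(fd, 0, SEEK_END);
        off_t pos = 0;

        while (pos < end) {
            off_t data = lseek(fd, pos, SEEK_DATA);
            if (data < 0) {
                break;                          /* only holes remain */
            }
            off_t hole = lseek(fd, data, SEEK_HOLE);
            printf("data %lld..%lld\n", (long long)data, (long long)hole);
            pos = hole;
        }

        close(fd);
        return 0;
    }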

Can you give us a snippet from your strace that shows the individual
lseek syscalls? Depending on which ranges are queried, maybe we could
optimise things by caching the previous result.
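
To illustrate the caching idea (purely hypothetical code, the names are
made up and this is not what file-posix.c currently does): keep the data
extent found by the previous SEEK_DATA/SEEK_HOLE pair and answer any
query that falls inside it without going back to the kernel:

    #define _GNU_SOURCE
    #include <stdbool.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/types.h>

    /* Hypothetical cache of the last extent found via
     * SEEK_DATA/SEEK_HOLE. */
    typedef struct {
        off_t start;   /* first byte of the cached data extent */
        off_t end;     /* first byte after the cached data extent */
        bool valid;
    } ExtentCache;

    /* Find the data extent containing 'offset' (if any).  A query that
     * falls inside the previously found extent is answered without any
     * syscall; otherwise two lseek() calls are made and the cache is
     * refreshed with the result. */
    int find_data_extent(int fd, ExtentCache *cache, off_t offset,
                         off_t *data_start, off_t *data_end)
    {
        if (cache->valid && offset >= cache->start && offset < cache->end) {
            *data_start = cache->start;
            *data_end = cache->end;
            return 0;                   /* cache hit, no lseek needed */
        }

        off_t data = lseek(fd, offset, SEEK_DATA);
        if (data < 0) {
            return -1;                  /* no data past offset */
        }
        off_t hole = lseek(fd, data, SEEK_HOLE);

        cache->start = data;
        cache->end = hole;
        cache->valid = true;
        *data_start = data;
        *data_end = hole;
        return 0;
    }

Whether this actually saves syscalls depends on how the query offsets
relate to the extent sizes, which is why the strace snippet would help.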

Also, a final remark, I know of some cases (on XFS) where lseeks were
slow because the image file was heavily fragmented. Defragmenting the
file resolved the problem, so this may be another thing to try.

On XFS, newer QEMU versions set an extent size hint on newly created
image files (during qemu-img create), which can reduce fragmentation
considerably.
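
Setting such a hint boils down to the FS_IOC_FSSETXATTR ioctl; here is a
minimal sketch (the 1 MiB value is just an example, not necessarily what
qemu-img picks, and the hint only influences future allocations, so it
is most useful on a still empty file):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>

    /* Set a per-file extent size hint so that XFS allocates space in
     * larger contiguous chunks, which reduces fragmentation of the
     * image file as the guest writes to it. */
    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <image-file>\n", argv[0]);
            return 1;
        }

        int fd = open(argv[1], O_RDWR);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        struct fsxattr attr;
        if (ioctl(fd, FS_IOC_FSGETXATTR, &attr) < 0) {
            perror("FS_IOC_FSGETXATTR");
            return 1;
        }

        attr.fsx_xflags |= FS_XFLAG_EXTSIZE;    /* enable the hint */
        attr.fsx_extsize = 1024 * 1024;         /* hint size in bytes */

        if (ioctl(fd, FS_IOC_FSSETXATTR, &attr) < 0) {
            perror("FS_IOC_FSSETXATTR");
            return 1;
        }

        close(fd);
        return 0;
    }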

Kevin

> Server hosting the VM:
>     - dual-Xeon hosts with NVMe storage and a 10 Gbit network card
>     - QEMU 4.0 and libvirt 5.4
>     - kernel 4.18.0.25
> 
> Guest having the issue:
>     - raw image with Debian 8
> 
> Here is the qemu-img info of the disk:
> > qemu-img info disk
> image: disk
> file format: raw
> virtual size: 400G (429496729600 bytes)
> disk size: 400G
> 
> 
> Quentin GROLLEAU
> 



