qemu-devel

Re: [Qemu-devel] Logging dirty pages from vhost-net in-kernel with vIOMMU


From: Jason Wang
Subject: Re: [Qemu-devel] Logging dirty pages from vhost-net in-kernel with vIOMMU
Date: Thu, 6 Dec 2018 20:44:03 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.2.1


On 2018/12/6 8:11 PM, Jintack Lim wrote:
On Thu, Dec 6, 2018 at 2:33 AM Jason Wang <address@hidden> wrote:

On 2018/12/5 10:47 PM, Jintack Lim wrote:
On Tue, Dec 4, 2018 at 8:30 PM Jason Wang <address@hidden> wrote:
On 2018/12/5 2:37 AM, Jintack Lim wrote:
Hi,

I'm wondering how the current implementation logs dirty pages from
vhost-net (in kernel) during migration when a vIOMMU is used.

I understand how vhost-net logs GPAs when not using a vIOMMU. But when
we use vhost with a vIOMMU, shouldn't vhost-net log the translated
address (GPA) instead of the address written in the descriptor (IOVA)?
The current implementation looks like vhost-net just logs the IOVA
without translation in vhost_get_vq_desc() in drivers/vhost/net.c, and
QEMU doesn't seem to do any further translation of the dirty log when
syncing.
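
To make the concern concrete, here is a rough user-space sketch of the
translation step that seems to be missing. The structs and helpers
below are hypothetical and for illustration only - they are not the
actual vhost code:

/*
 * Rough sketch only: the structs and helpers here are hypothetical and
 * not the real vhost-net code.  The point is that with a vIOMMU the
 * address taken from the descriptor is an IOVA, so the dirty bitmap
 * (which is indexed by GPA) has to be updated with the translated
 * address, not the IOVA itself.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096ULL

struct iotlb_entry {            /* one cached vIOMMU translation */
    uint64_t iova;
    uint64_t gpa;
    uint64_t size;
};

/* Look up the GPA backing an IOVA in a (hypothetical) IOTLB cache. */
static bool iotlb_translate(const struct iotlb_entry *tlb, size_t n,
                            uint64_t iova, uint64_t *gpa)
{
    for (size_t i = 0; i < n; i++) {
        if (iova >= tlb[i].iova && iova - tlb[i].iova < tlb[i].size) {
            *gpa = tlb[i].gpa + (iova - tlb[i].iova);
            return true;
        }
    }
    return false;  /* miss: would need to be resolved before logging */
}

/* Set the dirty bit for the page containing 'gpa' in a GPA-indexed bitmap. */
static void log_dirty_gpa(uint8_t *bitmap, uint64_t gpa)
{
    uint64_t pfn = gpa / PAGE_SIZE;
    bitmap[pfn / 8] |= (uint8_t)(1u << (pfn % 8));
}

/*
 * What the used-ring write-back path conceptually needs to do when a
 * vIOMMU is in use: translate first, then log the GPA.  Logging the
 * IOVA directly marks the wrong pages in the bitmap.
 */
static void log_write_iova(uint8_t *bitmap,
                           const struct iotlb_entry *tlb, size_t n,
                           uint64_t iova, uint64_t len)
{
    if (len == 0)
        return;
    uint64_t first = iova & ~(PAGE_SIZE - 1);
    uint64_t last  = (iova + len - 1) & ~(PAGE_SIZE - 1);
    for (uint64_t a = first; a <= last; a += PAGE_SIZE) {
        uint64_t gpa;
        if (iotlb_translate(tlb, n, a, &gpa))
            log_dirty_gpa(bitmap, gpa);
    }
}

int main(void)
{
    static uint8_t bitmap[64];
    /* Example mapping: IOVA 0x10000..0x13fff -> GPA 0x40000..0x43fff. */
    struct iotlb_entry tlb[] = {
        { .iova = 0x10000, .gpa = 0x40000, .size = 0x4000 },
    };
    /* A two-page write at IOVA 0x11000 should dirty GPA pages
     * 0x41000 and 0x42000, not IOVA pages 0x11000/0x12000. */
    log_write_iova(bitmap, tlb, 1, 0x11000, 2 * PAGE_SIZE);
    return 0;
}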

I might be missing something. Could somebody shed some light on this?
Good catch. It looks like a bug to me. Want to post a patch for this?
Thanks for the confirmation.

What would be a good setup to catch this kind of migration bug? I
tried to observe it in the VM, expecting to see network applications
on the destination not receiving data correctly, but I had no luck
(i.e. the VM on the destination just worked fine). I didn't even see
anything go wrong when I disabled the vhost logging completely,
without using a vIOMMU.

What I did was run multiple network benchmarks (e.g. netperf TCP
stream and my own benchmark that checks the correctness of received
data) in a VM with vhost dirty page logging disabled, and the
benchmarks still ran fine on the destination. I checked the used ring
at the time the VM was stopped on the source for migration, and it had
multiple descriptors that were (probably) not yet processed by the VM.
Do you have any insight into how it could just work, and what would be
a good setup to catch this?

According to past experience, it could be reproduced by doing scp from
host to guest during migration.

Thanks. I actually tried that, but didn't see any problem either - I
copied a large file from host to guest during migration (the copy
continued on the destination) and compared MD5 hashes with md5sum, but
the copied file had the same checksum as the one on the host.

Do you recall what kind of symptom you observed when the dirty pages
were not migrated correctly with scp?


Yes, the point is to make the migration converge before the scp
finishes (e.g. set the migration speed to a very large value). If the
scp ends before the migration does, we won't catch the bug. It's also
better to do several rounds of migration while the scp is running.
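
For example, from the HMP monitor on the source, something along these
lines (the destination address and port below are just placeholders):

  (qemu) migrate_set_speed 100G
  (qemu) migrate -d tcp:DEST_IP:4444
  (qemu) info migrate

with the scp from host to guest still in flight, and ideally repeating
the migration back and forth a few times while the copy is in progress.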

Anyway, let me try to reproduce it tomorrow.

Thanks



About sending a patch, as Michael suggested, I think it's better for
you to handle this case - this is not my area of expertise, yet :-)

No problem, I will fix this.

Thanks for spotting this issue.


Thanks


Thanks,
Jintack
