
Re: [PATCH 0/6] Move memory listener register to vhost_vdpa_init


From: Lei Yang
Subject: Re: [PATCH 0/6] Move memory listener register to vhost_vdpa_init
Date: Fri, 19 Jan 2024 22:44:54 +0800

QE tested this series with regression testing, and it causes the host
kernel to crash. Before the host kernel crashes, QEMU outputs the error
messages in [1]. Once these messages appear, continuing to hotplug/unplug
the NIC soon triggers the kernel crash. For the crash info please review
the attached file (a sketch of the failing map path follows the log below).
[1] qemu output:
failed to write, fd=148, errno=14 (Bad address)
vhost vdpa map fail!
vhost-vdpa: DMA mapping failed, unable to continue
(the three lines above repeat continuously until the crash)
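
For context on the errno above: errno 14 is EFAULT ("Bad address"),
which typically means the kernel rejected the userspace address in an
IOTLB update. Below is a condensed, self-contained sketch of how the
mapping is submitted, modeled on QEMU's vhost_vdpa_dma_map() but not a
verbatim copy: a struct vhost_msg_v2 carrying a VHOST_IOTLB_UPDATE
entry is written to the vhost-vdpa device fd.

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <linux/vhost.h>

/* Sketch of the map submission path: EFAULT here means the kernel
 * rejected msg.iotlb.uaddr, i.e. the QEMU virtual address backing the
 * guest RAM section was not usable when the map was issued. */
static int vdpa_dma_map_sketch(int fd, uint64_t iova, uint64_t size,
                               void *vaddr)
{
    struct vhost_msg_v2 msg = {
        .type = VHOST_IOTLB_MSG_V2,
        .iotlb = {
            .iova  = iova,
            .size  = size,
            .uaddr = (uint64_t)(uintptr_t)vaddr,
            .perm  = VHOST_ACCESS_RW,
            .type  = VHOST_IOTLB_UPDATE,
        },
    };

    if (write(fd, &msg, sizeof(msg)) != sizeof(msg)) {
        /* Same shape as the message in the log above. */
        fprintf(stderr, "failed to write, fd=%d, errno=%d (%s)\n",
                fd, errno, strerror(errno));
        return -errno;
    }
    return 0;
}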

Best Regards
Lei



On Fri, Jan 12, 2024 at 3:02 AM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> Current memory operations like pinning may take a lot of time at the
> destination.  Currently they are done after the source of the migration is
> stopped and before the workload is resumed at the destination.  This is a
> period where neither traffic can flow nor the VM workload can continue
> (downtime).
>
> We can do better, as we know the memory layout of the guest RAM at the
> destination from the moment all devices are initialized.  Moving that
> operation earlier lets QEMU communicate the maps to the kernel while the
> workload is still running on the source, so Linux can start mapping them.
> [a toy sketch of this reordering follows the quoted cover letter]
>
> As a small drawback, there is a period during initialization when QEMU
> cannot respond to QMP etc.  In some testing, this time is about
> 0.2 seconds.  It may be further reduced (or increased) depending on the
> vdpa driver and the platform hardware, and it is dominated by the cost
> of memory pinning.
>
> This matches the time that we move out of the so-called downtime window.
> The downtime is measured by checking the trace timestamp from the moment
> the source suspends the device to the moment the destination starts the
> eighth and last virtqueue pair.  For a 39G guest, it goes from ~2.2526
> secs to ~2.0949 secs (a ~0.16 s reduction).
>
> Future directions on top of this series may include moving more things
> ahead of the migration downtime, like setting DRIVER_OK or performing
> actual iterative migration of virtio-net devices.
>
> Comments are welcome.
>
> This series is a different approach from series [1]. As the title no
> longer reflects the changes, please refer to the previous one for the
> series history.
>
> [1] 
> https://patchwork.kernel.org/project/qemu-devel/cover/20231215172830.2540987-1-eperezma@redhat.com/
>
> Eugenio Pérez (6):
>   vdpa: check for iova tree initialized at net_client_start
>   vdpa: reorder vhost_vdpa_set_backend_cap
>   vdpa: set backend capabilities at vhost_vdpa_init
>   vdpa: add listener_registered
>   vdpa: reorder listener assignment
>   vdpa: move memory listener register to vhost_vdpa_init
>
>  include/hw/virtio/vhost-vdpa.h |  6 +++
>  hw/virtio/vhost-vdpa.c         | 87 +++++++++++++++++++++-------------
>  net/vhost-vdpa.c               |  4 +-
>  3 files changed, 63 insertions(+), 34 deletions(-)
>
> --
> 2.39.3
>
>
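
To make the reordering the cover letter describes concrete, here is a
self-contained toy sketch in C. All names here are illustrative
stand-ins, not QEMU code: registering the memory listener at init means
the expensive map/pin work runs before the downtime window, leaving only
cheap work at device start.

#include <stdio.h>

struct memory_listener {
    void (*region_add)(const char *name);
};

static void vdpa_region_add(const char *name)
{
    /* Stand-in for vhost-vdpa's region_add -> dma_map path: the page
     * pinning done here dominates the ~0.2 s cost mentioned above. */
    printf("map+pin %s (expensive)\n", name);
}

static struct memory_listener vdpa_listener = {
    .region_add = vdpa_region_add,
};

static void listener_register(struct memory_listener *l)
{
    /* Like QEMU's memory_listener_register(), registration replays the
     * existing memory sections through .region_add. */
    l->region_add("guest-ram");
}

static void device_init(void)
{
    /* After the series: maps are communicated to the kernel at init,
     * while the source is still running the workload. */
    listener_register(&vdpa_listener);
}

static void device_start(void)
{
    /* Before the series, listener_register() lived here instead, so
     * the pinning landed inside the downtime window. */
    printf("start vrings (cheap)\n");
}

int main(void)
{
    device_init();                     /* destination init: pin early */
    puts("... source suspended; downtime begins ...");
    device_start();                    /* only cheap work in downtime */
    return 0;
}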

Attachment: vmcore-dmesg.txt
Description: Text document

