From: costinel
Subject: [Bug 1869006] Re: PCIe cards passthrough to TCG guest works on 2GB of guest memory but fails on 4GB (vfio_dma_map invalid arg)
Date: Wed, 01 Jul 2020 00:21:27 -0000
I am experiencing the same behaviour with an x86_64 guest on an x86_64
host, which I'm attempting to EFI-boot (not hotplug) with a PCIe GPU
passed through.
This discussion (https://www.spinics.net/lists/iommu/msg40613.html)
suggests a change in drivers/iommu/intel-iommu.c, but in the kernel I
tried (linux-image-5.4.0-39-generic) that change appears to be already
implemented.
The hardware is an HP MicroServer Gen8 with the physical slot excluded
via conrep in the BIOS (https://www.jimmdenton.com/proliant-intel-dpdk/),
and the kernel is rebuilt with the RMRR patch
(https://forum.proxmox.com/threads/compile-proxmox-ve-with-patched-intel-iommu-driver-to-remove-rmrr-check.36374/).
A user also reports that on the same hardware this used to work with
kernel 5.3 + the RMRR patch (https://forum.level1techs.com/t/looking-for-vfio-wizards-to-troubleshoot-error-vfio-dma-map-22/153539)
but stopped working on the 5.4 kernel.
Is this the same issue I'm observing? My QEMU fails with a similar
message:
-device vfio-pci,host=07:00.0,id=hostdev0,bus=pci.4,addr=0x0:
vfio_dma_map(0x556eb57939f0, 0xc0000, 0x3ff40000, 0x7f6fc7ec0000) = -22
(Invalid argument)
/sys/kernel/iommu_groups/1/reserved_regions shows:
0x00000000000e8000 0x00000000000e8fff direct
0x00000000000f4000 0x00000000000f4fff direct
0x00000000d5f7e000 0x00000000d5f94fff direct
0x00000000fee00000 0x00000000feefffff msi
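A quick sketch of why the map request fails: vfio_dma_map returns -22
(EINVAL) when the requested IOVA range intersects one of the group's
reserved regions. Checking the failing request from the log above
(iova 0xc0000, size 0x3ff40000) against the reserved_regions listing is
a minimal, hedged way to see the clash; the values below are copied
verbatim from this report.

```python
# Reserved regions from /sys/kernel/iommu_groups/1/reserved_regions,
# copied from the listing above: (start, end, type).
reserved = [
    (0x00000000000e8000, 0x00000000000e8fff, "direct"),
    (0x00000000000f4000, 0x00000000000f4fff, "direct"),
    (0x00000000d5f7e000, 0x00000000d5f94fff, "direct"),
    (0x00000000fee00000, 0x00000000feefffff, "msi"),
]

def overlapping(iova, size, regions):
    """Return the reserved regions that intersect [iova, iova + size)."""
    end = iova + size
    return [(s, e, t) for (s, e, t) in regions if s < end and iova <= e]

# The failing request from the log: vfio_dma_map(..., 0xc0000, 0x3ff40000, ...)
clashes = overlapping(0xc0000, 0x3ff40000, reserved)
for s, e, t in clashes:
    print(f"request overlaps {t} region 0x{s:016x}-0x{e:016x}")
```

With these numbers the requested range [0xc0000, 0x40000000) covers both
low-memory direct regions, which is consistent with the EINVAL above.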
--
You received this bug notification because you are a member of
qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1869006
Title:
PCIe cards passthrough to TCG guest works on 2GB of guest memory but
fails on 4GB (vfio_dma_map invalid arg)
Status in QEMU:
New
Bug description:
During a meeting, a coworker asked "did anyone try to pass a PCIe
card through to a guest of another architecture?" and I decided to check.
I plugged SATA and USB3 controllers into spare slots on the mainboard and
started playing. On a 1GB VM instance it worked (both cold- and hot-
plugged). On a 4GB one it did not:
Error while starting the domain: internal error: process exited while
connecting to monitor: 2020-03-25T13:43:39.107524Z qemu-system-aarch64: -device
vfio-pci,host=0000:29:00.0,id=hostdev0,bus=pci.3,addr=0x0: VFIO_MAP_DMA: -22
2020-03-25T13:43:39.107560Z qemu-system-aarch64: -device
vfio-pci,host=0000:29:00.0,id=hostdev0,bus=pci.3,addr=0x0: vfio 0000:29:00.0:
failed to setup container for group 28: memory listener initialization failed:
Region mach-virt.ram: vfio_dma_map(0x563169753c80, 0x40000000, 0x100000000,
0x7fb2a3e00000) = -22 (Invalid argument)
Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 75, in
cb_wrapper
callback(asyncjob, *args, **kwargs)
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 111, in tmpcb
callback(*args, **kwargs)
File "/usr/share/virt-manager/virtManager/object/libvirtobject.py", line
66, in newfn
ret = fn(self, *args, **kwargs)
File "/usr/share/virt-manager/virtManager/object/domain.py", line 1279, in
startup
self._backend.create()
File "/usr/lib64/python3.8/site-packages/libvirt.py", line 1234, in create
if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
libvirt.libvirtError: internal error: process exited while connecting to
monitor: 2020-03-25T13:43:39.107524Z qemu-system-aarch64: -device
vfio-pci,host=0000:29:00.0,id=hostdev0,bus=pci.3,addr=0x0: VFIO_MAP_DMA: -22
2020-03-25T13:43:39.107560Z qemu-system-aarch64: -device
vfio-pci,host=0000:29:00.0,id=hostdev0,bus=pci.3,addr=0x0: vfio 0000:29:00.0:
failed to setup container for group 28: memory listener initialization failed:
Region mach-virt.ram: vfio_dma_map(0x563169753c80, 0x40000000, 0x100000000,
0x7fb2a3e00000) = -22 (Invalid argument)
I experimented with the memory size: 3054 MB is the maximum with which
the VM boots with coldplugged host PCIe cards.
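That 3054 MB ceiling lines up with the host's MSI reserved window.
QEMU's mach-virt board places guest RAM at 1 GiB (IOVA 0x40000000, as
the vfio_dma_map call in the log shows), and 0x40000000 plus 3054 MiB is
exactly 0xfee00000, where an msi reserved region starts in the
reserved_regions listing above. A quick arithmetic check, under the
assumption that this host reports the same MSI window:

```python
RAM_BASE = 0x40000000   # mach-virt guest RAM base (matches the log's IOVA)
MSI_START = 0xfee00000  # start of the msi reserved region from reserved_regions
MiB = 1 << 20

# Largest guest RAM size that keeps the DMA mapping below the MSI window.
max_mib = (MSI_START - RAM_BASE) // MiB
print(max_mib)  # 3054 -- matching the maximum the reporter found
```

If that assumption holds, any guest RAM above 3054 MB makes the mapping
cross into the reserved window, and vfio_dma_map rejects it with EINVAL.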
To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1869006/+subscriptions