From: Alex Williamson
Subject: [Bug 1869006] Re: PCIe cards passthrough to TCG guest works on 2GB of guest memory but fails on 4GB (vfio_dma_map invalid arg)
Date: Thu, 26 Mar 2020 20:51:48 -0000

Yes, to support this the vfio driver would need to be able to exclude
ranges of guest GPA space.  Recent kernels already expose an IOVA list
via the vfio API.  Clearly, excluding GPA ranges has implications for
hotplug.  On x86 the ranges are pretty consistent, but IIRC differ
between Intel and AMD.  The vfio IOVA list was originally developed for
an ARM use case though, and I imagine there's little or no consistency
there.
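
For illustration, that IOVA list is reachable from userspace through the
VFIO_IOMMU_GET_INFO capability chain (VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE,
which was added for the ARM use case mentioned above). A minimal sketch in C,
assuming a type1 container fd that already has a group attached and the IOMMU
model set:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    /* Print the usable IOVA ranges of a container.  Assumes container_fd
     * already has a group attached and VFIO_TYPE1_IOMMU set; the gaps
     * between the printed ranges are the reserved regions. */
    static void dump_iova_list(int container_fd)
    {
        struct vfio_iommu_type1_info *info;
        __u32 argsz = sizeof(*info);

        info = calloc(1, argsz);
        info->argsz = argsz;

        /* First call reports how much room the capability chain needs. */
        if (ioctl(container_fd, VFIO_IOMMU_GET_INFO, info))
            goto out;
        if (info->argsz > argsz) {
            argsz = info->argsz;
            info = realloc(info, argsz);
            memset(info, 0, argsz);
            info->argsz = argsz;
            if (ioctl(container_fd, VFIO_IOMMU_GET_INFO, info))
                goto out;
        }
        if (!(info->flags & VFIO_IOMMU_INFO_CAPS) || !info->cap_offset) {
            printf("kernel does not report an IOVA list\n");
            goto out;
        }

        /* Walk the capability chain looking for the IOVA range list. */
        struct vfio_info_cap_header *hdr =
            (struct vfio_info_cap_header *)((char *)info + info->cap_offset);
        for (;;) {
            if (hdr->id == VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE) {
                struct vfio_iommu_type1_info_cap_iova_range *cap =
                    (struct vfio_iommu_type1_info_cap_iova_range *)hdr;
                for (__u32 i = 0; i < cap->nr_iovas; i++)
                    printf("usable IOVA: 0x%llx-0x%llx\n",
                           (unsigned long long)cap->iova_ranges[i].start,
                           (unsigned long long)cap->iova_ranges[i].end);
                break;
            }
            if (!hdr->next)
                break;
            hdr = (struct vfio_info_cap_header *)((char *)info + hdr->next);
        }
    out:
        free(info);
    }

On a typical Intel host the MSI block around 0xfee00000 shows up as a gap
between two of the reported usable ranges.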

Re: pci=nomsi, the reserved range exists regardless of whether MSI is
actually used.
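
The same reserved ranges are also visible without any VFIO setup at all,
since the IOMMU layer exports them per group in sysfs; group 28 here is taken
from the log quoted below. A minimal sketch:

    #include <stdio.h>

    int main(void)
    {
        /* Each line is "<start> <end> <type>", e.g.
         * "0x00000000fee00000 0x00000000feefffff msi" on an Intel host,
         * and the msi entry is present whether or not MSI is in use. */
        FILE *f = fopen("/sys/kernel/iommu_groups/28/reserved_regions", "r");
        char line[128];

        if (!f) {
            perror("fopen");
            return 1;
        }
        while (fgets(line, sizeof(line), f))
            fputs(line, stdout);
        fclose(f);
        return 0;
    }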

-- 
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1869006

Title:
  PCIe cards passthrough to TCG guest works on 2GB of guest memory but
  fails on 4GB (vfio_dma_map invalid arg)

Status in QEMU:
  New

Bug description:
  During a meeting a coworker asked "has anyone tried to pass a PCIe
  card through to a guest of another arch?" and I decided to check it.

  I plugged SATA and USB3 controllers into spare slots on the mainboard
  and started playing. On a 1 GB VM instance it worked (both cold- and
  hot-plugged). On a 4 GB one it did not:

  Error starting domain: internal error: process exited while connecting to monitor: 2020-03-25T13:43:39.107524Z qemu-system-aarch64: -device vfio-pci,host=0000:29:00.0,id=hostdev0,bus=pci.3,addr=0x0: VFIO_MAP_DMA: -22
  2020-03-25T13:43:39.107560Z qemu-system-aarch64: -device vfio-pci,host=0000:29:00.0,id=hostdev0,bus=pci.3,addr=0x0: vfio 0000:29:00.0: failed to setup container for group 28: memory listener initialization failed: Region mach-virt.ram: vfio_dma_map(0x563169753c80, 0x40000000, 0x100000000, 0x7fb2a3e00000) = -22 (Invalid argument)
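
  For context, the vfio_dma_map() failure logged above is QEMU's type1
backend getting EINVAL back from the VFIO_IOMMU_MAP_DMA ioctl; the four
logged values are the container, the IOVA (guest physical address), the
size, and the host virtual address. A sketch of the equivalent call with
the values from the log (the helper name is made up for illustration):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    /* Hypothetical helper mirroring the failing call: map the guest's
     * RAM region into the IOMMU at its guest-physical address.  Fails
     * with EINVAL when [iova, iova + size) overlaps a reserved IOVA
     * range of the host IOMMU. */
    static int map_guest_ram(int container_fd, void *vaddr)
    {
        struct vfio_iommu_type1_dma_map map;

        memset(&map, 0, sizeof(map));
        map.argsz = sizeof(map);
        map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
        map.vaddr = (__u64)(unsigned long)vaddr; /* 0x7fb2a3e00000 in the log */
        map.iova  = 0x40000000;                  /* mach-virt RAM base (GPA)  */
        map.size  = 0x100000000;                 /* 4 GiB of guest memory     */

        return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
    }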

  Traceback (most recent call last):
    File "/usr/share/virt-manager/virtManager/asyncjob.py", line 75, in cb_wrapper
      callback(asyncjob, *args, **kwargs)
    File "/usr/share/virt-manager/virtManager/asyncjob.py", line 111, in tmpcb
      callback(*args, **kwargs)
    File "/usr/share/virt-manager/virtManager/object/libvirtobject.py", line 66, in newfn
      ret = fn(self, *args, **kwargs)
    File "/usr/share/virt-manager/virtManager/object/domain.py", line 1279, in startup
      self._backend.create()
    File "/usr/lib64/python3.8/site-packages/libvirt.py", line 1234, in create
      if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
  libvirt.libvirtError: internal error: process exited while connecting to monitor: 2020-03-25T13:43:39.107524Z qemu-system-aarch64: -device vfio-pci,host=0000:29:00.0,id=hostdev0,bus=pci.3,addr=0x0: VFIO_MAP_DMA: -22
  2020-03-25T13:43:39.107560Z qemu-system-aarch64: -device vfio-pci,host=0000:29:00.0,id=hostdev0,bus=pci.3,addr=0x0: vfio 0000:29:00.0: failed to setup container for group 28: memory listener initialization failed: Region mach-virt.ram: vfio_dma_map(0x563169753c80, 0x40000000, 0x100000000, 0x7fb2a3e00000) = -22 (Invalid argument)

  
  I played with the memory size; 3054 MB is the maximum value with which
the VM boots with cold-plugged host PCIe cards.
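
  The 3054 MB figure fits Alex's explanation exactly, assuming an x86
host with the usual MSI reserved range starting at 0xfee00000: mach-virt
places guest RAM at GPA 0x40000000, and 0x40000000 + 3054 MiB is exactly
0xfee00000, so 3054 MB is the largest size that stops short of the
reserved hole, while 4 GiB runs well past it. A quick check:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Assumed layout: mach-virt RAM base, and the x86 MSI reserved
         * IOVA range that the host IOMMU refuses to map over. */
        const uint64_t ram_base  = 0x40000000;
        const uint64_t msi_start = 0xfee00000;

        /* Largest RAM size that ends before the reserved range. */
        printf("max RAM: %llu MB\n",
               (unsigned long long)((msi_start - ram_base) >> 20)); /* 3054 */

        /* 4 GiB of RAM runs from 0x40000000 to 0x140000000 and thus
         * crosses 0xfee00000, so VFIO_IOMMU_MAP_DMA fails with EINVAL. */
        const uint64_t ram_end_4g = ram_base + (4ULL << 30);
        printf("4 GiB RAM ends at 0x%llx (%s)\n",
               (unsigned long long)ram_end_4g,
               ram_end_4g > msi_start ? "overlaps reserved range" : "fits");
        return 0;
    }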

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1869006/+subscriptions


