qemu-devel


From: Greg Zdanowski
Subject: [Bug 1869006] Re: PCIe cards passthrough to TCG guest works on 2GB of guest memory but fails on 4GB (vfio_dma_map invalid arg)
Date: Tue, 20 Oct 2020 21:11:15 -0000

@alex-l-williamson: is there any safe(ish) way to ignore an RMRR coming
from the BIOS?

I don't know how the IOMMU actually works in the kernel, but could the kernel
theoretically have a flag forcing it to ignore certain RMRRs? If I understand
this correctly, ignoring an RMRR entry may cause two things:
1) DMA failure if remapping is attempted
2) corruption of the device's memory if something (e.g. KVM) touches that
region because we ignored the RMRR

Linux already has mechanisms to override stubborn BIOSes (e.g. enabling
x2APIC when it is disabled with no option to turn it on in the BIOS).
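
For reference, the RMRRs the BIOS declares can be listed from userspace
without any kernel changes, by decoding the ACPI DMAR table. A minimal
sketch in Python (assuming an Intel VT-d host; the table must be read as
root, and only type-1 remapping structures are RMRRs):

  import struct

  # Read the raw DMAR table exported by the kernel (root required).
  dmar = open("/sys/firmware/acpi/tables/DMAR", "rb").read()

  off = 48  # 36-byte ACPI header + host address width, flags, reserved
  while off + 4 <= len(dmar):
      # Every remapping structure starts with a 2-byte type and 2-byte length.
      stype, slen = struct.unpack_from("<HH", dmar, off)
      if slen == 0:
          break  # malformed table; avoid looping forever
      if stype == 1:  # type 1 = RMRR
          # Layout: type, length, reserved, segment, base address, limit address.
          _, _, _, seg, base, limit = struct.unpack_from("<HHHHQQ", dmar, off)
          print(f"RMRR segment {seg:04x}: 0x{base:x}-0x{limit:x}")
      off += slen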


The only thing I'm worried about is what you said:
> The more significant aspect when RMRRs are involved in this restriction is
> that an RMRR is essentially the platform firmware dictating that the host OS
> must maintain an identity map between the device and a range of physical
> address space. We don't know the purpose of that mapping, but we can assume
> that it allows the device to provide ongoing data for platform firmware to
> consume.

Does this mean that if the kernel is "blind" to a given RMRR region,
something else may break because these regions need special treatment
beyond simply excluding them from IOMMU mappings?

-- 
You received this bug notification because you are a member of
qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1869006

Title:
  PCIe cards passthrough to TCG guest works on 2GB of guest memory but
  fails on 4GB (vfio_dma_map invalid arg)

Status in QEMU:
  New

Bug description:
  During a meeting a coworker asked "has anyone tried to pass through a
  PCIe card to a guest of a different architecture?" and I decided to
  check it.

  I plugged SATA and USB3 controllers into spare slots on the mainboard
  and started playing. On a 1GB VM instance it worked (both cold- and
  hot-plugged). On a 4GB one it did not:

  Error while starting the domain: internal error: process exited while connecting to monitor: 2020-03-25T13:43:39.107524Z qemu-system-aarch64: -device vfio-pci,host=0000:29:00.0,id=hostdev0,bus=pci.3,addr=0x0: VFIO_MAP_DMA: -22
  2020-03-25T13:43:39.107560Z qemu-system-aarch64: -device vfio-pci,host=0000:29:00.0,id=hostdev0,bus=pci.3,addr=0x0: vfio 0000:29:00.0: failed to setup container for group 28: memory listener initialization failed: Region mach-virt.ram: vfio_dma_map(0x563169753c80, 0x40000000, 0x100000000, 0x7fb2a3e00000) = -22 (Invalid argument)

  Traceback (most recent call last):
    File "/usr/share/virt-manager/virtManager/asyncjob.py", line 75, in cb_wrapper
      callback(asyncjob, *args, **kwargs)
    File "/usr/share/virt-manager/virtManager/asyncjob.py", line 111, in tmpcb
      callback(*args, **kwargs)
    File "/usr/share/virt-manager/virtManager/object/libvirtobject.py", line 66, in newfn
      ret = fn(self, *args, **kwargs)
    File "/usr/share/virt-manager/virtManager/object/domain.py", line 1279, in startup
      self._backend.create()
    File "/usr/lib64/python3.8/site-packages/libvirt.py", line 1234, in create
      if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
  libvirt.libvirtError: internal error: process exited while connecting to monitor: 2020-03-25T13:43:39.107524Z qemu-system-aarch64: -device vfio-pci,host=0000:29:00.0,id=hostdev0,bus=pci.3,addr=0x0: VFIO_MAP_DMA: -22
  2020-03-25T13:43:39.107560Z qemu-system-aarch64: -device vfio-pci,host=0000:29:00.0,id=hostdev0,bus=pci.3,addr=0x0: vfio 0000:29:00.0: failed to setup container for group 28: memory listener initialization failed: Region mach-virt.ram: vfio_dma_map(0x563169753c80, 0x40000000, 0x100000000, 0x7fb2a3e00000) = -22 (Invalid argument)

  
  I experimented with the memory size: 3054 MB is the maximum value with
  which the VM boots when host PCIe cards are cold-plugged.
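
  For what it's worth, the -22 (EINVAL) is consistent with the requested
  mapping overlapping a range the host IOMMU reserves: the kernel rejects
  vfio_dma_map() calls whose IOVA window intersects a reserved region, and
  the per-group reserved ranges are exported in sysfs, so the collision can
  be checked directly. A minimal sketch in Python, with the group number and
  IOVA/size taken from the log above (notably, the 1 GiB RAM base plus the
  3054 MB ceiling ends exactly at 0xfee00000, the start of the MSI window
  that x86 hosts commonly reserve):

    GROUP = 28           # from "failed to setup container for group 28"
    IOVA = 0x40000000    # mach-virt.ram base passed to vfio_dma_map()
    SIZE = 0x100000000   # 4 GiB of guest RAM

    # Each line reads "0x<start> 0x<end> <type>", e.g. an "msi" window.
    with open(f"/sys/kernel/iommu_groups/{GROUP}/reserved_regions") as f:
        for line in f:
            start, end, rtype = line.split()
            start, end = int(start, 16), int(end, 16)
            if start < IOVA + SIZE and end >= IOVA:  # ranges overlap
                print(f"conflict: 0x{start:x}-0x{end:x} ({rtype})")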

To manage notifications about this bug go to:
https://bugs.launchpad.net/qemu/+bug/1869006/+subscriptions


