
Re: [PATCH v2 1/1] virtio: fix the condition for iommu_platform not supported


From: Daniel Henrique Barboza
Subject: Re: [PATCH v2 1/1] virtio: fix the condition for iommu_platform not supported
Date: Fri, 28 Jan 2022 09:12:26 -0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.5.0



On 1/28/22 08:48, Halil Pasic wrote:
On Fri, 28 Jan 2022 08:02:39 -0300
Daniel Henrique Barboza <danielhb413@gmail.com> wrote:

We may be able to differentiate between the two using ->dma_as, but for
that it needs to be set up correctly: whenever you require translation,
it should be something other than address_space_memory. The question
is why you would require translation but not have your ->dma_as set up
properly. It can be a guest thing, i.e. the guest just assumes it has to
use bus addresses while it actually does not have to, or we indeed have
an IOMMU which polices the device's access to guest memory, but for
some strange reason we failed to set up ->dma_as to reflect that.


I have two suggestions. The first is to separate how we interpret
iommu_platform; I find it hard to do this properly without creating a
new flag/command line option.


A new command line option looks problematic to me because of the
existing setups. We could tie that to a compat machine, but it looks
ugly and also a little wrong from where I stand.

My second suggestion is, well... I think it's been shown that s390x-PV
and AMD SEV are being impacted (and probably Power secure guests as
well), so why not check for confidential guest support and skip that
check entirely? Something like this patch:


This is not acceptable for s390x, and it should not be acceptable for
SEV or Power secure guests, because s390x Secure Execution support
predates the confidential guest support patches and "->cgs", and thus
you don't have to turn on CGS to use SE. Just providing iommu_platform=on
manually on each device is perfectly fine! It should be the same for SEV.

Hm, that's unfortunate. Checking machine->cgs would be an easy way out.


[..]
+    if (!machine->cgs && has_iommu &&
+        !virtio_host_has_feature(vdev, VIRTIO_F_IOMMU_PLATFORM)) {
            error_setg(errp, "iommu_platform=true is not supported by the device");
            return;
        }
[..]
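(For context: machine->cgs is the ConfidentialGuestSupport pointer on
MachineState, set when the machine is created with a
confidential-guest-support object, so at device-plug time it indicates
whether the guest was configured as a confidential guest.)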

This will not break anything for non-secure guests and, granted that
machine->cgs is already set at this point, this will fix the problem for
s390x-PV and AMD SEV. And we won't have to dive deep into a virtio-bus
feature negotiation saga because of something that can easily be handled
for machine->cgs guests only.

Your assumption does not hold. See above. Unfortunately, my assumption
that ->dma_as == &address_space_memory implies no translation is needed
does not hold either. But IMHO we should really get to the bottom of
that, because it just does not make sense.


I'll make an attempt to understand the logic on the Power side.



If this patch works for you and Brijesh I believe this is a good option.

I don't believe it is a good option. @Brijesh, can you confirm that SEV
has the same problem with this approach that s390x has, and that it
would break existing setups?

I have another idea, but my problem is that I don't understand enough of
the Power and PCI stuff. Anyway, if on your platform iommu_platform=on
devices cannot work in a VM that does not have an IOMMU, you could
error out on that. You could express that via a machine property, and
then make sure your DMA address space is not address_space_memory if
that machine property is set.
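(A rough sketch of that idea, not an actual patch: "iommu-required"
would be a hypothetical machine property, and the helper below is made
up; error_setg and address_space_memory are real QEMU APIs.)

    #include "qemu/osdep.h"
    #include "exec/address-spaces.h"
    #include "qapi/error.h"

    /* Hypothetical check, run when a device is plugged: if the platform
     * declares that iommu_platform=on only makes sense behind a vIOMMU,
     * a DMA address space equal to plain system memory means no IOMMU
     * was wired up, so refuse the device instead of misbehaving later. */
    static void check_iommu_required(bool machine_iommu_required,
                                     AddressSpace *dma_as, Error **errp)
    {
        if (machine_iommu_required && dma_as == &address_space_memory) {
            error_setg(errp,
                       "iommu_platform=on needs a vIOMMU on this machine");
        }
    }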


Bear in mind that the root problem I reported up there isn't something
that's just Power specific. Any arch in which vhost-user-fs-pci does not
support iommu_platform will have the problem as well (e.g. x86 and the
RH bug Kevin fixed).


What I mean is that I can fix my side using the PowerPC PCI
specifications and be done with it, but that would not help x86, for
example. I believe a better way is to use the PowerPC case to understand
where the overall common logic can be improved for everyone.


Thanks,


Daniel




Regards,
Halil


