
From: David Hildenbrand
Subject: Re: [PATCH v5 07/18] s390x: protvirt: Inhibit balloon when switching to protected mode
Date: Thu, 19 Mar 2020 18:31:11 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.5.0


>> I asked this question already to Michael (cc) via a different channel,
>> but here it is again:
>> Why does the balloon driver not support VIRTIO_F_IOMMU_PLATFORM? It is
>> absolutely not clear to me. The introducing commit mentioned that it
>> "bypasses DMA". I fail to see that.
>> At least the communication via the SG mechanism should work perfectly
>> fine with an IOMMU enabled. So I assume it boils down to the pages that
>> we inflate/deflate not being referenced via IOVA?
> AFAIU the IOVA/GPA stuff is not the problem here. You have said it
> yourself, the SG mechanism would work for balloon out of the box, as it
> does for the other virtio devices. I assumed the fact that the balloon
> driver fences VIRTIO_F_ACCESS_PLATFORM
> means according to Michael that the device has full access to the entire
> guest RAM. If VIRTIO_F_ACCESS_PLATFORM is negotiated this may or may not
> be the case.

So you say

"The virtio specification tells that the device is to present this flag
when it "can only access certain memory addresses with said access
specified and/or granted by the platform"."

So, AFAIU, *any* virtio device (hypervisor side) has to present this
flag when PV is enabled. In that regard, your patch makes perfect sense
(although I am not sure it's a good idea to overwrite these feature bits
- maybe they should instead be activated permanently on the cmdline
whenever PV is to be used?).

> The actual problem is that the pages denoted by the buffer transmitted
> via the virtqueue are normally not shared pages. I.e. the hypervisor
> cannot reuse them (which is the point of balloon inflate). To make this
> work, the guest would need to share the pages before saying 'host these
> are in my balloon, so you can use them'. This is a piece of logic we

What exactly would have to be done in the hypervisor to support it?

Assume we have to trigger sharing/unsharing - this sounds like a very
architecture specific thing? Or is this e.g., doing a map/unmap
operation like mapping/unmapping the SG?

Right now it sounds to me "we have to do $ARCHSPECIFIC when
inflating/deflating in the guest", which feels wrong.

> need only if the host/the device does not have full access to the
> guest RAM. That is in my opinion why the balloon driver fences
> VIRTIO_F_ACCESS_PLATFORM.
> Does that make sense?

Yeah, I understood the "device has to set VIRTIO_F_ACCESS_PLATFORM"
part. Struggling with the "what can the guest driver actually do" part.

>> I don't think they have to be IOVA addresses. We're neither reading nor
>> writing these pages. We really speak about "physical memory in the
>> system" when ballooning. Everything else doesn't really make sense.
>> There is no need to map/unmap pages we inflate/deflate AFAIK.
>> IMHO, we should not try to piggy-back on VIRTIO_F_IOMMU_PLATFORM here,
>> but instead explicitly disable it either in the hypervisor or the guest.
> We need a feature bit here. We can say fencing VIRTIO_F_ACCESS_PLATFORM
> was a bug, fix that bug, and then invent another 'the guest RAM is
> somehow different' feature bit specific to the balloon, and then create
> arch hooks in the driver that get active if this feature is negotiated.
>> I hope someone can clarify what the real issue with an IOMMU and
>> ballooning is, because I'll be having the same "issue" with
>> virtio-mem.
> The issue is not with the IOMMU, the issue is with restricted access
> to guest RAM. The definition of VIRTIO_F_ACCESS_PLATFORM is such that we
> pretty much know what's up when VIRTIO_F_ACCESS_PLATFORM is not
> presented, but VIRTIO_F_ACCESS_PLATFORM presented can mean a couple of
> things.



David / dhildenb
