From: Maxime Coquelin
Subject: Re: [Qemu-devel] [PATCH 6/6] spec/vhost-user spec: Add IOMMU support
Date: Wed, 17 May 2017 16:10:46 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.0
On 05/17/2017 04:53 AM, Jason Wang wrote:
> On 05/16/2017 23:16, Michael S. Tsirkin wrote:
>> On Mon, May 15, 2017 at 01:45:28PM +0800, Jason Wang wrote:
>>> On 05/13/2017 08:02, Michael S. Tsirkin wrote:
>>>> On Fri, May 12, 2017 at 04:21:58PM +0200, Maxime Coquelin wrote:
>>>>> On 05/11/2017 08:25 PM, Michael S. Tsirkin wrote:
>>>>>> On Thu, May 11, 2017 at 02:32:46PM +0200, Maxime Coquelin wrote:
>>>>>>> This patch specifies and implements the master/slave communication
>>>>>>> to support device IOTLB in slave.
>>>>>>>
>>>>>>> The vhost_iotlb_msg structure introduced for kernel backends is
>>>>>>> re-used, making the design close between the two backends.
>>>>>>>
>>>>>>> An exception is the use of the secondary channel to enable the
>>>>>>> slave to send IOTLB miss requests to the master.
>>>>>>>
>>>>>>> Signed-off-by: Maxime Coquelin <address@hidden>
>>>>>>> ---
>>>>>>>  docs/specs/vhost-user.txt | 75 +++++++++++++++++++++++++++++++++++++++++++++++
>>>>>>>  hw/virtio/vhost-user.c    | 31 ++++++++++++++++++++
>>>>>>>  2 files changed, 106 insertions(+)
>>>>>>>
>>>>>>> diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
>>>>>>> index 5fa7016..4a1f0c3 100644
>>>>>>> --- a/docs/specs/vhost-user.txt
>>>>>>> +++ b/docs/specs/vhost-user.txt
>>>>>>> @@ -97,6 +97,23 @@ Depending on the request type, payload can be:
>>>>>>>        log offset: offset from start of supplied file descriptor
>>>>>>>                    where logging starts (i.e. where guest address 0 would be logged)
>>>>>>>
>>>>>>> + * An IOTLB message
>>>>>>> +   ---------------------------------------------------------
>>>>>>> +   | iova | size | user address | permissions flags | type |
>>>>>>> +   ---------------------------------------------------------
>>>>>>> +
>>>>>>> +   IOVA: a 64-bit guest I/O virtual address
>>>>>> guest -> VM
>>>>> Ok.
>>>>>>> +   Size: a 64-bit size
>>>>>> How do you specify "all memory"? give special meaning to size 0?
>>>>> Good point, it does not support all memory currently.
>>>>> It is not vhost-user specific, but general to the vhost implementation.
>>>> But iommu needs it to support passthrough.
>>> Probably not, we will just pass the mappings in vhost_memory_region to
>>> vhost. Its memory_size is also a __u64.
>>>
>>> Thanks
>> That's different since that's chunks of qemu virtual memory. IOMMU maps
>> IOVA to GPA.
> But we in fact cache the IOVA -> HVA mapping in the remote IOTLB. When
> passthrough mode is enabled, IOVA == GPA, so passing mappings in
> vhost_memory_region should be fine.
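(For reference, the vhost_iotlb_msg layout being re-used above is defined
in the kernel's <linux/vhost.h> UAPI; reproduced here, roughly, for
readability -- the field comments are mine:)

struct vhost_iotlb_msg {
        __u64 iova;     /* I/O virtual address */
        __u64 size;     /* size of the mapping */
        __u64 uaddr;    /* userspace virtual address ("user address" above) */
#define VHOST_ACCESS_RO      0x1
#define VHOST_ACCESS_WO      0x2
#define VHOST_ACCESS_RW      0x3
        __u8 perm;      /* permissions flags */
#define VHOST_IOTLB_MISS           1
#define VHOST_IOTLB_UPDATE         2
#define VHOST_IOTLB_INVALIDATE     3
#define VHOST_IOTLB_ACCESS_FAIL    4
        __u8 type;      /* message type */
};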
Not sure this is a good idea, because when configured in passthrough,
QEMU will see the IOMMU as enabled, so the VIRTIO_F_IOMMU_PLATFORM
feature will be negotiated if both guest and backend support it.
So how will the backend know whether it should pick the translation
directly from the vhost_memory_region, or translate it through the
device IOTLB?

Maybe the solution would be for QEMU to wrap "all memory" IOTLB updates
& invalidations to vhost_memory_regions, since the backend won't anyway
be able to perform accesses outside these regions?
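Roughly, such wrapping could look like the sketch below. This is only an
illustration of the idea, not actual QEMU code: the helper name, the loop
over dev->mem->regions and the exact update call are assumptions.

/* Hypothetical sketch: instead of forwarding one huge "all memory" IOTLB
 * update, QEMU would emit one IOTLB update per memory region it already
 * shares with the backend. With passthrough, IOVA == GPA, so the region's
 * guest_phys_addr can serve as the IOVA. */
static int vhost_iotlb_update_all_regions(struct vhost_dev *dev)
{
    int i, ret;

    for (i = 0; i < dev->mem->nregions; i++) {
        struct vhost_memory_region *reg = &dev->mem->regions[i];

        ret = vhost_backend_update_device_iotlb(dev,
                                                reg->guest_phys_addr,  /* iova */
                                                reg->userspace_addr,   /* uaddr */
                                                reg->memory_size,
                                                IOMMU_RW);
        if (ret < 0) {
            return ret;
        }
    }
    return 0;
}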
> The only possible "issue" with "all memory" is if you can not use a
> single TLB invalidation to invalidate all caches in remote TLB.
If needed, maybe we could introduce a new VHOST_IOTLB_INVALIDATE message
type? For older kernel backends that don't support it, -EINVAL will be
returned, so QEMU could handle it another way in this case.
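Just to sketch that fallback path (the "invalidate all" message type below
is an assumption and does not exist today; only struct vhost_msg,
VHOST_IOTLB_MSG and the write()-to-the-vhost-fd path are part of the
current kernel interface):

#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <linux/vhost.h>

/* Hypothetical new message type value, for illustration only. */
#define VHOST_IOTLB_INVALIDATE_ALL  5

/* Returns 0 on success, -EINVAL if the (older) kernel backend does not
 * recognize the message, in which case QEMU can fall back to invalidating
 * each region separately. */
static int vhost_kernel_try_invalidate_all(int vhost_fd)
{
    struct vhost_msg msg;

    memset(&msg, 0, sizeof(msg));
    msg.type = VHOST_IOTLB_MSG;
    msg.iotlb.type = VHOST_IOTLB_INVALIDATE_ALL;

    if (write(vhost_fd, &msg, sizeof(msg)) != sizeof(msg)) {
        return -errno;
    }
    return 0;
}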
> But this is only a theoretical problem since it only happens when we
> have a 1 byte mapping [2^64 - 1, 2^64) cached in remote TLB. Consider:
>
> - E.g Intel IOMMU has a range limitation for invalidation (1G currently)
> - Looks like all existing IOMMUs use page aligned mappings
>
> It was probably not a big issue. And for safety we could use two
> invalidations to make sure all caches were flushed remotely. Or just
> change the protocol from start, size to start, end. Vhost-kernel is
> probably too late for this change, but I'm still not quite sure it is
> worthwhile.
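To make that corner case concrete (invalidate() below is just a placeholder
callback, not an actual vhost or vhost-user message):

#include <stdint.h>

/* With (start, size) both 64-bit, a range ending exactly at 2^64 cannot be
 * described in one message, since the required size (2^64) does not fit in
 * a u64. Two invalidations cover the whole IOVA space, including a 1-byte
 * mapping at [2^64 - 1, 2^64). */
static void invalidate_whole_iova_space(void (*invalidate)(uint64_t start,
                                                           uint64_t size))
{
    invalidate(0, UINT64_MAX);   /* covers [0, 2^64 - 1) */
    invalidate(UINT64_MAX, 1);   /* covers [2^64 - 1, 2^64) */
}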
I'm not for diverging the protocol between kernel & user backends.

Thanks,
Maxime
> Thanks