From: Peter Xu
Subject: Re: [Qemu-devel] [RFC v1 3/3] intel_iommu: add scalable-mode option to make scalable mode work
Date: Fri, 15 Feb 2019 13:39:05 +0800
User-agent: Mutt/1.10.1 (2018-07-13)

On Fri, Feb 15, 2019 at 01:22:34PM +0800, Yi Sun wrote:

[...]

> > > +    /* TODO: read cap/ecap from host to decide which cap to be exposed. */
> > > +    if (s->scalable_mode) {
> > > +        if (!s->caching_mode) {
> > > +            error_report("Need to set caching-mode for scalable mode");
> > 
> > Could I ask why?
> > 
> My intention is to make the guest send explicit invalidations, to make
> sure the SLT shadowing is done correctly.
> 
> On this point I also have a question: why does legacy mode not check CM?
> If CM is not set, can the DMA remapping go wrong because the shadowed
> SLT may not match the guest's latest changes?

Because CM is currently only required for device assignment.  For
example, if you only have an emulated device like virtio-net-pci in
the guest, then you don't need the CM capability for it to work with
the vIOMMU.  That's because we walk the 2nd level IOMMU page table
only when the virtio-net-pci device does DMA, and QEMU can easily do
that (QEMU is emulating the DMA of the virtio-net-pci device, and it
has full knowledge of the guest's 2nd level IOMMU page tables).
Assigned devices are special because the host hardware knows nothing
about the guest's 2nd level page tables, so QEMU needs to shadow them
before DMA starts.
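
To make the difference concrete, here's a minimal, self-contained C
sketch (illustration only, not QEMU code: the two-level table layout,
the names, and the host_map() callback are simplified stand-ins made up
for this example).  The first function models the emulated-device path,
which walks the guest's SL table on demand at DMA time; the second
models what device assignment needs, shadowing every present mapping
ahead of time, which is why it depends on the guest's explicit
invalidations (CM) to stay in sync:

  /* Minimal illustration only -- not QEMU code.  The table format below
   * is a simplified stand-in for the guest's second-level (SL) tables. */
  #include <stdint.h>
  #include <stdio.h>

  #define SL_PRESENT   0x1ULL
  #define SL_ADDR_MASK (~0xfffULL)

  /* Toy two-level table: sl_root[i] points to a leaf table of PTEs. */
  typedef struct {
      uint64_t *sl_root[512];
  } GuestSLTable;

  /* Emulated device path: translate on demand, at the moment the device
   * issues a DMA, by walking the guest table that QEMU can read
   * directly.  Nothing is cached on the "hardware" side, so no CM is
   * needed for correctness. */
  static int emulated_dma_translate(const GuestSLTable *t, uint64_t iova,
                                    uint64_t *gpa)
  {
      uint64_t *leaf = t->sl_root[(iova >> 21) & 0x1ff];
      uint64_t pte;

      if (!leaf) {
          return -1;                      /* translation fault */
      }
      pte = leaf[(iova >> 12) & 0x1ff];
      if (!(pte & SL_PRESENT)) {
          return -1;
      }
      *gpa = (pte & SL_ADDR_MASK) | (iova & 0xfff);
      return 0;
  }

  /* Device assignment path: the host IOMMU cannot walk the guest table,
   * so every present mapping must be shadowed into the host (e.g. via
   * VFIO map calls) before the device DMAs.  Keeping this shadow in sync
   * with later guest changes is what requires the guest's explicit
   * invalidations, i.e. why CM matters. */
  static void shadow_whole_table(const GuestSLTable *t,
                                 void (*host_map)(uint64_t iova, uint64_t gpa))
  {
      for (int i = 0; i < 512; i++) {
          if (!t->sl_root[i]) {
              continue;
          }
          for (int j = 0; j < 512; j++) {
              uint64_t pte = t->sl_root[i][j];
              if (pte & SL_PRESENT) {
                  host_map(((uint64_t)i << 21) | ((uint64_t)j << 12),
                           pte & SL_ADDR_MASK);
              }
          }
      }
  }

  static void print_map(uint64_t iova, uint64_t gpa)
  {
      printf("shadow map: iova 0x%llx -> gpa 0x%llx\n",
             (unsigned long long)iova, (unsigned long long)gpa);
  }

  int main(void)
  {
      static uint64_t leaf[512];
      GuestSLTable t = { 0 };
      uint64_t gpa;

      leaf[3] = 0xabc000ULL | SL_PRESENT;   /* iova 0x3000 -> gpa 0xabc000 */
      t.sl_root[0] = leaf;

      if (emulated_dma_translate(&t, 0x3042, &gpa) == 0) {
          printf("on-demand walk: iova 0x3042 -> gpa 0x%llx\n",
                 (unsigned long long)gpa);
      }
      shadow_whole_table(&t, print_map);
      return 0;
  }

Compiling and running this prints one on-demand translation and one
shadow-map call for the same toy mapping.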

That's also why I listed "device assignment" as a special case in the
test device matrix, because that's the only case where we can torture
the IOMMU page shadowing code a bit.

For scalable mode I would suppose you will still allow it to work
without caching mode.  The example is the same as above: when there
are only emulated devices in the guest.
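
As a usage sketch (assuming the property name "scalable-mode" from this
series' subject, and that the usual q35/interrupt-remapping setup is
already in place), an emulated-only guest could then be started without
caching mode, e.g.:

  qemu-system-x86_64 -machine q35,accel=kvm,kernel-irqchip=split \
      -device intel-iommu,intremap=on,scalable-mode=on \
      -device virtio-net-pci,netdev=n0 -netdev user,id=n0 ...

while a configuration that assigns a host device with vfio-pci would
additionally need caching-mode=on so the shadowing code sees the
guest's invalidations.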

Regards,

-- 
Peter Xu


