Re: [Qemu-devel] [PATCH 03/10] intel-iommu: add iommu lock


From: Tian, Kevin
Subject: Re: [Qemu-devel] [PATCH 03/10] intel-iommu: add iommu lock
Date: Fri, 27 Apr 2018 07:19:25 +0000

> From: Peter Xu
> Sent: Friday, April 27, 2018 2:26 PM
> 
> On Fri, Apr 27, 2018 at 01:13:02PM +0800, Jason Wang wrote:
> >
> >
> > On 2018-04-25 12:51, Peter Xu wrote:
> > > Add a per-iommu big lock to protect IOMMU status.  Currently the only
> > > thing to be protected is the IOTLB cache, since that can be accessed
> > > even without BQL, e.g., in IO dataplane.
> > >
> > > Note that device page tables should not need any protection.  The safety
> > > of that should be provided by guest OS.  E.g., when a page entry is
> > > freed, the guest OS should be responsible to make sure that no device
> > > will be using that page any more.

The device page table definitely doesn't require protection, since
it is an in-memory structure managed by the guest. However, the
reasoning above is not accurate - there is no way the guest OS can
make sure that no device uses a non-present page entry; otherwise
it wouldn't need a virtual IOMMU to protect itself. There could be
bogus/malicious drivers which may well program the device to
attempt exactly that.

> > >
> > > Reported-by: Fam Zheng <address@hidden>
> > > Signed-off-by: Peter Xu <address@hidden>
> > > ---
> > >   include/hw/i386/intel_iommu.h |  8 ++++++++
> > >   hw/i386/intel_iommu.c         | 31 +++++++++++++++++++++++++++++--
> > >   2 files changed, 37 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
> > > index 220697253f..1a8ba8e415 100644
> > > --- a/include/hw/i386/intel_iommu.h
> > > +++ b/include/hw/i386/intel_iommu.h
> > > @@ -262,6 +262,14 @@ struct IntelIOMMUState {
> > >       uint8_t w1cmask[DMAR_REG_SIZE]; /* RW1C(Write 1 to Clear) bytes */
> > >       uint8_t womask[DMAR_REG_SIZE];  /* WO (write only - read returns 0) */
> > >       uint32_t version;
> > > +    /*
> > > +     * Protects IOMMU states in general.  Normally we don't need to
> > > +     * take this lock when we are with BQL held.  However we have code
> > > +     * paths that may run even without BQL.  In those cases, we need
> > > +     * to take the lock when we have access to IOMMU state
> > > +     * informations, e.g., the IOTLB.

Better if you can explicitly whitelist those no-BQL paths here, to clarify.
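
For example, something along these lines (just a sketch - the helper
names are made up and the whitelist below only reflects the IO
dataplane example from the commit message, so the real list needs to
be checked against the code):

#include "qemu/osdep.h"
#include "qemu/thread.h"

typedef struct SketchIOMMUState {
    /*
     * Protects IOMMU state in general, mainly the IOTLB cache.
     * Paths known to run without BQL (the whitelist):
     *   - IO dataplane / vhost threads translating DMA addresses
     *     through the IOMMU memory region's ->translate() callback.
     * Callers that do hold BQL still take this lock, so the two
     * kinds of paths serialize against each other.
     */
    QemuMutex iommu_lock;
    /* ... IOTLB hash table, context cache, registers, ... */
} SketchIOMMUState;

static inline void sketch_iommu_lock(SketchIOMMUState *s)
{
    qemu_mutex_lock(&s->iommu_lock);
}

static inline void sketch_iommu_unlock(SketchIOMMUState *s)
{
    qemu_mutex_unlock(&s->iommu_lock);
}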

> > > +     */
> > > +    QemuMutex iommu_lock;
> >
> > Some questions:
> >
> > 1) Do we need to protect context cache too?
> 
> IMHO the context cache entry should work even without lock.  That's a
> bit tricky since we have two cases where this cache will be updated:
> 
>   (1) first translation of the address space of a device
>   (2) invalidation of context entries
> 
> For (2) IMHO we don't need to worry about it, since the guest OS should be
> controlling that part, say, the device should not be doing any translation
> (DMA operations) when the context entry is invalidated.

Again, the above cannot be assumed.
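
If the context entry cache does get protected as well, I would expect
roughly the following shape (a sketch with made-up names, not the
actual code): both the lookup/refill on the translation path and the
invalidation path take the same per-IOMMU lock, e.g. around a cache
generation number, so an invalidation and a concurrent refill
serialize on the lock:

#include "qemu/osdep.h"
#include "qemu/thread.h"

typedef struct SketchCtxCacheEntry {
    bool     valid;
    uint32_t gen;        /* generation the cached entry belongs to */
    uint64_t lo, hi;     /* cached context entry, refilled on miss */
} SketchCtxCacheEntry;

typedef struct SketchIOMMU {
    QemuMutex iommu_lock;
    uint32_t  context_cache_gen;
} SketchIOMMU;

/* Translation path: use the cached context entry, refill on miss. */
static bool sketch_get_context_entry(SketchIOMMU *s,
                                     SketchCtxCacheEntry *ce)
{
    bool hit;

    qemu_mutex_lock(&s->iommu_lock);
    hit = ce->valid && ce->gen == s->context_cache_gen;
    if (!hit) {
        /* ... re-read the context entry from guest memory into
         * ce->lo/ce->hi here ... */
        ce->gen = s->context_cache_gen;
        ce->valid = true;
    }
    qemu_mutex_unlock(&s->iommu_lock);
    return hit;
}

/* Invalidation path: bump the generation under the same lock. */
static void sketch_context_cache_invalidate(SketchIOMMU *s)
{
    qemu_mutex_lock(&s->iommu_lock);
    s->context_cache_gen++;
    qemu_mutex_unlock(&s->iommu_lock);
}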

> 
> For (1) the worst case is that the context entry cache gets updated
> multiple times with the same value by multiple threads.  IMHO that'll
> be fine too.
> 
> But yes for sure we can protect that too with the iommu lock.
> 
> > 2) Can we just reuse qemu BQL here?
> 
> I would prefer not.  As I mentioned, I have already spent too much
> time fighting the BQL.  I really hope we can start to use isolated
> locks where possible.  The BQL is always the worst choice to me.
> 
> > 3) I think the issue is common to all other kinds of IOMMU, so can we
> > simply synchronize before calling ->translate() in memory.c. This seems
> > a more common solution.
> 
> I suspect Power and s390 live well with that.  I think it means at
> least these platforms won't have problems with concurrency.  I'm adding
> DavidG to the loop in case there is further comment.  IMHO we should just
> make sure the IOMMU code is thread safe, and fix problems if there are any.
> 
> Thanks,
> 
> --
> Peter Xu


reply via email to

[Prev in Thread] Current Thread [Next in Thread]