Re: [PATCH] hw/vfio: let readonly flag take effect for mmaped regions


From: Yan Zhao
Subject: Re: [PATCH] hw/vfio: let readonly flag take effect for mmaped regions
Date: Sun, 29 Mar 2020 21:35:27 -0400
User-agent: Mutt/1.9.4 (2018-02-28)

On Sat, Mar 28, 2020 at 01:25:37AM +0800, Alex Williamson wrote:
> On Fri, 27 Mar 2020 11:19:34 +0000
> address@hidden wrote:
> 
> > From: Yan Zhao <address@hidden>
> > 
> > Currently, VFIO regions without VFIO_REGION_INFO_FLAG_WRITE are only
> > treated as read-only when VFIO_REGION_INFO_FLAG_MMAP is not set.
> > 
> > Regions with VFIO_REGION_INFO_FLAG_READ | VFIO_REGION_INFO_FLAG_MMAP
> > set are only read-only in the host page table for QEMU.
> > 
> > This patch also marks the corresponding EPT page entries read-only for
> > regions with VFIO_REGION_INFO_FLAG_READ | VFIO_REGION_INFO_FLAG_MMAP.
> > 
> > Accordingly, guest writes are ignored when writes to these read-only
> > regions are trapped.
> > 
> > Signed-off-by: Yan Zhao <address@hidden>
> > Signed-off-by: Xin Zeng <address@hidden>
> > ---
> 
> Currently we set the r/w protection on the mmap, do I understand
> correctly that the change in the vfio code below results in KVM exiting
> to QEMU to handle a write to a read-only region and therefore we need
> the memory.c change to drop the write?  This prevents a SIGBUS or
> similar?
Yes, correct. The change in memory.c is there to prevent a SIGSEGV in the
host, since the region is mmapped read-only. We think it's better to just
drop the guest writes rather than crash QEMU.
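
For context, the protection of the host mapping is derived from the region
flags, so a region without VFIO_REGION_INFO_FLAG_WRITE is mmapped without
PROT_WRITE. Roughly, paraphrasing vfio_region_mmap() (not verbatim):

    /* mapping protection follows the region info flags from the driver */
    prot |= region->flags & VFIO_REGION_INFO_FLAG_READ ? PROT_READ : 0;
    prot |= region->flags & VFIO_REGION_INFO_FLAG_WRITE ? PROT_WRITE : 0;

    region->mmaps[i].mmap = mmap(NULL, region->mmaps[i].size, prot,
                                 MAP_SHARED, region->vbasedev->fd,
                                 region->fd_offset + region->mmaps[i].offset);

so any store QEMU itself performs into that mapping would fault.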

> 
> Meanwhile vfio_region_setup() uses the same vfio_region_ops for all
> regions and vfio_region_write() would still allow writes, so if the
> device were using x-no-mmap=on, I think we'd still get a write to this
> region and expect the vfio device to drop it.  Should we prevent that
> write in QEMU as well?
Yes, right now it expects the vfio device to drop it.
Since the driver exposes the region without VFIO_REGION_INFO_FLAG_WRITE,
it should be able to handle such writes properly.
Both dropping the write in QEMU and dropping it in the vfio device are
fine with us; we wonder which one you would prefer :)
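
If dropping it in QEMU is preferred, a guard at the top of
vfio_region_write() would be enough. A sketch of the kind of check we
have in mind (illustrative only, not a tested patch):

    static void vfio_region_write(void *opaque, hwaddr addr,
                                  uint64_t data, unsigned size)
    {
        VFIORegion *region = opaque;

        /* silently drop writes to regions the driver reports as read-only */
        if (!(region->flags & VFIO_REGION_INFO_FLAG_WRITE)) {
            return;
        }
        ...
    }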


> Can you also identify what device and region requires this so that we
> can decide whether this is QEMU 5.0 or 5.1 material?  PCI BARs are of
> course always R/W and the ROM uses different ops and doesn't support
> mmap, so this is a device specific region of some sort.  Thanks,
> 
It's a virtual mdev device for which we want to emulate a read-only
MMIO BAR.
Is there any requirement that PCI BARs have to be R/W? We couldn't
find one in the PCI specification.
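
For reference, on the kernel side the vendor driver just reports the
region without the WRITE flag in its VFIO_DEVICE_GET_REGION_INFO
handling, something along these lines (illustrative sketch; bar_size and
the offset encoding are driver specific):

    struct vfio_region_info info = {
        .argsz  = sizeof(info),
        .index  = VFIO_PCI_BAR2_REGION_INDEX,  /* example BAR index */
        .flags  = VFIO_REGION_INFO_FLAG_READ |
                  VFIO_REGION_INFO_FLAG_MMAP,  /* note: no ..._FLAG_WRITE */
        .size   = bar_size,                    /* driver specific */
        .offset = region_offset,               /* driver-specific encoding */
    };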

Thanks
Yan


> 
> >  hw/vfio/common.c | 4 ++++
> >  memory.c         | 3 +++
> >  2 files changed, 7 insertions(+)
> > 
> > diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> > index 0b3593b3c0..e901621ca0 100644
> > --- a/hw/vfio/common.c
> > +++ b/hw/vfio/common.c
> > @@ -971,6 +971,10 @@ int vfio_region_mmap(VFIORegion *region)
> >                                            name, region->mmaps[i].size,
> >                                            region->mmaps[i].mmap);
> >          g_free(name);
> > +
> > +        if (!(region->flags & VFIO_REGION_INFO_FLAG_WRITE)) {
> > +            memory_region_set_readonly(&region->mmaps[i].mem, true);
> > +        }
> >          memory_region_add_subregion(region->mem, region->mmaps[i].offset,
> >                                      &region->mmaps[i].mem);
> >  
> > diff --git a/memory.c b/memory.c
> > index 601b749906..4b1071dc74 100644
> > --- a/memory.c
> > +++ b/memory.c
> > @@ -1313,6 +1313,9 @@ static void memory_region_ram_device_write(void *opaque, hwaddr addr,
> >      MemoryRegion *mr = opaque;
> >  
> >      trace_memory_region_ram_device_write(get_cpu_index(), mr, addr, data, size);
> > +    if (mr->readonly) {
> > +        return;
> > +    }
> >  
> >      switch (size) {
> >      case 1:
> 
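
For completeness, the reason the guest write exits to QEMU at all once the
subregion is marked read-only is that the KVM memory listener installs
read-only MemoryRegions as KVM_MEM_READONLY memslots. Roughly, paraphrasing
kvm-all.c (not verbatim):

    static int kvm_mem_flags(MemoryRegion *mr)
    {
        bool readonly = mr->readonly || memory_region_is_romd(mr);
        int flags = 0;

        if (memory_region_get_dirty_log_mask(mr) != 0) {
            flags |= KVM_MEM_LOG_DIRTY_PAGES;
        }
        if (readonly && kvm_readonly_mem_allowed) {
            flags |= KVM_MEM_READONLY;
        }
        return flags;
    }

With KVM_MEM_READONLY the EPT entry is not writable, so the guest write
comes back to QEMU as an MMIO exit and is dispatched to
memory_region_ram_device_write() above, which now drops it.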


