From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH v4 03/11] dataplane: add host memory mapping code
Date: Wed, 5 Dec 2012 13:22:26 +0200

On Wed, Dec 05, 2012 at 09:31:56AM +0100, Stefan Hajnoczi wrote:
> On Thu, Nov 29, 2012 at 04:36:08PM +0200, Michael S. Tsirkin wrote:
> > On Thu, Nov 29, 2012 at 03:26:56PM +0100, Stefan Hajnoczi wrote:
> > > On Thu, Nov 29, 2012 at 03:54:25PM +0200, Michael S. Tsirkin wrote:
> > > > On Thu, Nov 22, 2012 at 04:16:44PM +0100, Stefan Hajnoczi wrote:
> > > > > The data plane thread needs to map guest physical addresses to host
> > > > > pointers.  Normally this is done with cpu_physical_memory_map() but the
> > > > > function assumes the global mutex is held.  The data plane thread does
> > > > > not touch the global mutex and therefore needs a thread-safe memory
> > > > > mapping mechanism.
> > > > > 
> > > > > Hostmem registers a MemoryListener similar to how vhost collects and
> > > > > pushes memory region information into the kernel.  There is a
> > > > > fine-grained lock on the regions list which is held during lookup and
> > > > > when installing a new regions list.
> > > > > 
> > > > > When the physical memory map changes the MemoryListener callbacks are
> > > > > invoked.  They build up a new list of memory regions which is finally
> > > > > installed when the list has been completed.
> > > > > 
> > > > > Note that this approach is not safe across memory hotplug because mapped
> > > > > pointers may still be in use across memory unplug.  However, this is
> > > > > currently a problem for QEMU in general and needs to be addressed in the
> > > > > future.
> > > > > 
> > > > > Signed-off-by: Stefan Hajnoczi <address@hidden>
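
For illustration, a rough sketch of the two-phase update the commit message
describes: the listener callbacks fill a staging list, and the commit callback
publishes it under a fine-grained lock.  All names here (HostmemRegion,
current_regions, hostmem_listener_commit, ...) are assumptions for the sketch
and need not match the actual patch.

/* Sketch only: types and helpers are illustrative, not the patch itself. */
typedef struct {
    hwaddr guest_addr;        /* guest physical start address */
    uint64_t size;            /* length of the region in bytes */
    void *host_addr;          /* host pointer to the start of the region */
} HostmemRegion;

typedef struct {
    QemuMutex current_regions_lock;   /* protects current_regions */
    HostmemRegion *current_regions;   /* sorted list used for lookups */
    size_t num_current_regions;
    HostmemRegion *new_regions;       /* staging list built by callbacks */
    size_t num_new_regions;
    MemoryListener listener;
} Hostmem;

/* Invoked from the MemoryListener commit() callback once the staging list
 * is complete: swap it in under the lock and free the old list. */
static void hostmem_listener_commit(MemoryListener *listener)
{
    Hostmem *hostmem = container_of(listener, Hostmem, listener);
    HostmemRegion *old_regions;

    qemu_mutex_lock(&hostmem->current_regions_lock);
    old_regions = hostmem->current_regions;
    hostmem->current_regions = hostmem->new_regions;
    hostmem->num_current_regions = hostmem->num_new_regions;
    hostmem->new_regions = NULL;      /* next rebuild starts a fresh list */
    hostmem->num_new_regions = 0;
    qemu_mutex_unlock(&hostmem->current_regions_lock);

    g_free(old_regions);
}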
> > > > 
> > > > Worth bothering with binary search?
> > > > vhost does a linear search over regions because
> > > > the number of ram regions is very small.
> > > 
> > > memory.c does binary search.  I did the same but in practice there are
> > > <20 regions for a simple VM.  It's probably not worth it but without
> > > performance results this is speculation.
> > > 
> > > I think there's no harm in using binary search to start with.
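
For reference, a binary-search lookup over a regions list kept sorted by guest
address might look roughly like this, reusing the illustrative
Hostmem/HostmemRegion fields sketched above; the actual patch may differ.

/* Sketch: translate a guest physical range to a host pointer, or return
 * NULL if the range does not fall entirely inside one RAM region. */
static void *hostmem_lookup(Hostmem *hostmem, hwaddr phys, uint64_t len)
{
    void *host_addr = NULL;
    size_t lo = 0;
    size_t hi;

    qemu_mutex_lock(&hostmem->current_regions_lock);
    hi = hostmem->num_current_regions;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        HostmemRegion *region = &hostmem->current_regions[mid];

        if (phys < region->guest_addr) {
            hi = mid;                       /* search lower half */
        } else if (phys - region->guest_addr >= region->size) {
            lo = mid + 1;                   /* search upper half */
        } else {
            /* Containing region found; the whole buffer must fit in it */
            if (len <= region->size - (phys - region->guest_addr)) {
                host_addr = (uint8_t *)region->host_addr +
                            (phys - region->guest_addr);
            }
            break;
        }
    }
    qemu_mutex_unlock(&hostmem->current_regions_lock);
    return host_addr;
}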
> > > 
> > > > > +static void hostmem_listener_append_region(MemoryListener *listener,
> > > > > +                                           MemoryRegionSection *section)
> > > > > +{
> > > > > +    Hostmem *hostmem = container_of(listener, Hostmem, listener);
> > > > > +
> > > > > +    if (memory_region_is_ram(section->mr)) {
> > > > > +        hostmem_append_new_region(hostmem, section);
> > > > > +    }
> > > > 
> > > > I think you also need to remove the VGA region since you
> > > > don't mark these pages as dirty, so access there won't work.
> > > 
> > > I don't understand.  If memory in the VGA region returns true from
> > > memory_region_is_ram(), why would there be a problem?
> > 
> > If you change this memory, you don't update the display.
> > That never happens with non-buggy guests, but we should catch it and fail if it does.
> 
> Okay, I took a look at the VGA code and I think it makes sense now.  We
> have VRAM as a regular RAM region so that writes to it are cheap.  To
> avoid scanning or redrawing VRAM on every update we use dirty logging.
> 
> Since virtio-blk-data-plane does not mark pages dirty, an I/O buffer in
> VRAM would fail to update the display correctly.
> 
> I will try to put in a check to omit the VGA region.

There are many ways to do this but I guess the simplest
is to detect dirty logging and skip that region.
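
A minimal sketch of that check, on top of the append callback quoted above
(assuming memory_region_is_logging() from the memory API; the exact predicate
and its signature may vary between QEMU versions):

static void hostmem_listener_append_region(MemoryListener *listener,
                                           MemoryRegionSection *section)
{
    Hostmem *hostmem = container_of(listener, Hostmem, listener);

    /* Only RAM regions can be turned into host pointers */
    if (!memory_region_is_ram(section->mr)) {
        return;
    }

    /* Skip regions with dirty logging enabled (e.g. VGA VRAM): the data
     * plane never marks pages dirty, so the display would not be updated. */
    if (memory_region_is_logging(section->mr)) {
        return;
    }

    hostmem_append_new_region(hostmem, section);
}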

>  It can be dropped
> in the future when we use the memory API with dirty logging from the
> data plane thread.
> 
> Stefan

Or not - there's also the issue that e.g. cirrus does tricks
with memory mapping on the data path, so skipping
that region might help performance.

-- 
MST


