From: Marcelo Tosatti
Subject: Re: [Qemu-devel] Re: [PATCHv2 10/12] tap: add vhost/vhostfd options
Date: Tue, 2 Mar 2010 13:21:36 -0300
User-agent: Mutt/1.5.20 (2009-08-17)

On Tue, Mar 02, 2010 at 10:12:05AM -0600, Anthony Liguori wrote:
> On 03/02/2010 09:53 AM, Paul Brook wrote:
> >>>>The key difference is that these regions are created and destroyed
> >>>>rarely and in such a way that the destruction is visible to the guest.
> >>>So you're making ram unmap an asynchronous process, and requiring that
> >>>the address space not be reused until that umap has completed?
> >>It technically already would be.  If you've got a pending DMA
> >>transaction and you try to hot unplug, badness will happen.  This is
> >>something that is certainly exploitable.
> >Hmm, I guess we probably want to make this work with all mappings then.
> >DMA to a ram backed PCI BAR (e.g. video ram) is certainly feasible.
> >Technically it's not the unmap that causes badness, it's freeing the
> >underlying ram.
> 
> Let's avoid confusing terminology.  We have RAM mappings and then we
> have PCI BARs that are mapped as IO_MEM_RAM.
> 
> PCI BARs mapped as IO_MEM_RAM are allocated by the device and live
> for the duration of the device.  If you did something that changed
> the BAR's mapping from IO_MEM_RAM to an actual IO memory type, then
> you'd continue to DMA to the allocated device memory instead of
> doing MMIO operations.[1]
> 
> That's completely accurate and safe.  If you did this to bare metal,
> I expect you'd get very similar results.
> 
> This is different from DMA'ing to a RAM region and then removing the
> RAM region while the IO is in flight.  In this case, the mapping
> disappears and you potentially have the guest writing to an invalid
> host pointer.
> 
> [1] I don't think it's useful to support DMA'ing to arbitrary
> IO_MEM_RAM areas.  Instead, I think we should always bounce to this
> memory.  The benefit is that we avoid the complications resulting
> from PCI hot unplug and reference counting.
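
To make the distinction above concrete, here is a rough sketch of how a
qemu-0.12-era PCI device ends up with a BAR mapped as IO_MEM_RAM: the device
allocates its RAM once at init and points the BAR at that allocation whenever
the BAR is mapped. Everything prefixed demo_ (plus DemoState and
DEMO_VRAM_SIZE) is invented for illustration, and the headers and signatures
are only approximate for that era of the tree:

    #include "hw/hw.h"
    #include "hw/pci.h"

    #define DEMO_VRAM_SIZE (16 * 1024 * 1024)

    typedef struct DemoState {
        PCIDevice dev;
        ram_addr_t vram_offset;     /* offset returned by qemu_ram_alloc() */
    } DemoState;

    /* BAR map callback: back the BAR with the device's own RAM allocation. */
    static void demo_bar_map(PCIDevice *pci_dev, int region_num,
                             pcibus_t addr, pcibus_t size, int type)
    {
        DemoState *s = DO_UPCAST(DemoState, dev, pci_dev);

        cpu_register_physical_memory(addr, size, s->vram_offset | IO_MEM_RAM);
    }

    static int demo_initfn(PCIDevice *pci_dev)
    {
        DemoState *s = DO_UPCAST(DemoState, dev, pci_dev);

        /* Allocated once here; lives for the lifetime of the device. */
        s->vram_offset = qemu_ram_alloc(DEMO_VRAM_SIZE);
        pci_register_bar(pci_dev, 0, DEMO_VRAM_SIZE,
                         PCI_BASE_ADDRESS_SPACE_MEMORY, demo_bar_map);
        return 0;
    }

If the guest reprograms the BAR, only the guest-physical mapping changes; the
underlying allocation (and any host pointer into it) stays valid, which is the
"completely accurate and safe" case described above.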
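
The unplug hazard and the bounce behaviour being discussed revolve around
cpu_physical_memory_map()/cpu_physical_memory_unmap(). A minimal sketch of the
usual calling pattern from that era follows; demo_dma_write is invented, and
the details should be treated as approximate:

    #include <string.h>
    #include "cpu-common.h"   /* cpu_physical_memory_map/unmap/rw (~2010 tree) */

    static void demo_dma_write(target_phys_addr_t guest_addr,
                               const uint8_t *buf, int size)
    {
        target_phys_addr_t len = size;

        /* For RAM-backed guest addresses this returns a direct host pointer;
         * for anything else it falls back to an internal bounce buffer and
         * may return NULL while that (single) buffer is busy. */
        void *host = cpu_physical_memory_map(guest_addr, &len, 1 /* is_write */);
        if (!host || len < (target_phys_addr_t)size) {
            if (host) {
                cpu_physical_memory_unmap(host, len, 1, 0);
            }
            /* Bounce path: copy via cpu_physical_memory_rw() instead. */
            cpu_physical_memory_rw(guest_addr, (uint8_t *)buf, size, 1);
            return;
        }

        /* The hazard described above: if the RAM region backing guest_addr
         * were hot-unplugged at this point, 'host' would dangle and the copy
         * below would go through an invalid host pointer. */
        memcpy(host, buf, size);
        cpu_physical_memory_unmap(host, len, 1, size);
    }

Always bouncing, as footnote [1] suggests for IO_MEM_RAM BARs, means the host
pointer never outlives the copy, so no reference counting of the underlying
allocation is needed.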

Agreed. Thus the suggestion to tie cpu_physical_memory_map to the qdev
infrastructure.
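
Nothing like the following exists in the tree; it is only a hypothetical
sketch of what tying cpu_physical_memory_map to the qdev infrastructure could
mean: a per-device wrapper that records outstanding mappings so that hot
unplug can drain or invalidate them before the backing RAM is released. All
demo_* names and the DeviceDMAMapping structure are invented:

    #include "qemu-common.h"
    #include "qemu-queue.h"
    #include "cpu-common.h"
    #include "hw/qdev.h"      /* DeviceState (~2010 tree) */

    typedef struct DeviceDMAMapping {
        void *host;
        target_phys_addr_t len;
        QLIST_ENTRY(DeviceDMAMapping) next;
    } DeviceDMAMapping;

    /* In a real design this list would hang off DeviceState; a file-scope
     * list keeps the sketch short. */
    static QLIST_HEAD(, DeviceDMAMapping) demo_mappings =
        QLIST_HEAD_INITIALIZER(demo_mappings);

    static void *demo_device_memory_map(DeviceState *dev,
                                        target_phys_addr_t addr,
                                        target_phys_addr_t *plen, int is_write)
    {
        void *host = cpu_physical_memory_map(addr, plen, is_write);

        if (host) {
            DeviceDMAMapping *m = qemu_mallocz(sizeof(*m));
            m->host = host;
            m->len = *plen;
            QLIST_INSERT_HEAD(&demo_mappings, m, next);
            /* Hot unplug of 'dev' could then wait for, or invalidate, every
             * entry on this list before the RAM it points into goes away. */
        }
        return host;
    }

A matching unmap wrapper would remove the entry again once the DMA completes.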