

Re: [Qemu-devel] Address translation - virt->phys->ram

From: Anthony Liguori
Subject: Re: [Qemu-devel] Address translation - virt->phys->ram
Date: Mon, 22 Feb 2010 10:52:37 -0600

On 02/22/2010 10:46 AM, Ian Molton wrote:
Anthony Liguori wrote:


But this function has some subtle characteristics.  It may return a
bounce buffer if you attempt to map MMIO memory.  There is a limited
pool of bounce buffers available so it may return NULL in the event that
it cannot allocate a bounce buffer.

It may also return a partial result if you're attempting to map a region
that straddles multiple memory slots.
Thanks. I had found this, but was unsure whether it was quite what
I wanted. (Also, is it possible to tell when it has, e.g., allocated a
bounce buffer?)

Basically, I need to get buffer(s) from guest userspace into the host's
address space. The buffers are virtually contiguous but likely
physically discontiguous. They are allocated with malloc() and there's
nothing I can do about that.

The obvious but slow solution would be to copy all the buffers into nice
virtio-based scatter/gather buffers and feed them to the host that way;
however, it's not fast enough.

Why is this slow?


Anthony Liguori

Right now I have a little driver I have written that allows a buffer to
be mmap()ed by the guest userspace, and this is pushed to the host via
virtio s/g I/O when the guest calls fsync(). This buffer contains the
data that must be passed to the host; however, this data may often
contain pointers to (that is, userspace virtual addresses of) buffers of
unknown sizes which the host also needs to access. These buffers are
what I need to read from the guest's RAM.

The buffers will likely remain active across multiple different calls to
the host, so their pages will need to be available. As the calls always
happen when that specific process is active, I'd guess the worst we need
to do is generate a page fault to swap the page(s) back in. Can that be
caused by qemu (under kvm)?

It seems that cpu_physical_memory_map() deals with physically contiguous
areas of guest address space. I need to get a host-side mapping of a
*virtually* contiguous (possibly physically discontiguous) set of guest
pages. If this can be done, it'd mean direct transfer of data from guest
application to host shared library, which would be a major win.

