Re: [Qemu-devel] Address translation - virt->phys->ram

From: Alexander Graf
Subject: Re: [Qemu-devel] Address translation - virt->phys->ram
Date: Mon, 22 Feb 2010 19:56:59 +0100
User-agent: Thunderbird (X11/20090817)

Ian Molton wrote:
> Anthony Liguori wrote:
>> On 02/22/2010 10:46 AM, Ian Molton wrote:
>>> Anthony Liguori wrote:
>>>> cpu_physical_memory_map().
>>>> But this function has some subtle characteristics.  It may return a
>>>> bounce buffer if you attempt to map MMIO memory.  There is a limited
>>>> pool of bounce buffers available so it may return NULL in the event that
>>>> it cannot allocate a bounce buffer.
>>>> It may also return a partial result if you're attempting to map a region
>>>> that straddles multiple memory slots.
>>> Thanks. I had found this, but was unsure as to whether it was quite what
>>> I wanted. (Also, is it possible to tell when it has, e.g., allocated a
>>> bounce buffer?)
>>> Basically, I need to get buffer(s) from guest userspace into the host's
>>> address space. The buffers are virtually contiguous but likely
>>> physically discontiguous. They are allocated with malloc() and there's
>>> nothing I can do about that.
>>> The obvious but slow solution would be to copy all the buffers into nice
>>> virtio-based scatter/gather buffers and feed them to the host that way;
>>> however, it's not fast enough.
>> Why is this slow?
> Because the buffers will all have to be copied. So far, switching from
> abusing an instruction to interrupt qemu to using virtio has incurred a
> roughly 5x slowdown. I'd guess much of this is down to the fact that we
> have to switch to kernel mode on the guest and back again for every
> single GL call...
> If I can establish some kind of stable guest_virt->phys->host_virt
> mapping, many of the problems will just 'go away'. A way to interrupt
> qemu from user mode on the guest without involving the guest kernel
> would be quite awesome also (there's really nothing we want the kernel to
> actually /do/ here; it just adds overhead).

I guess what you really want is a shm region between host and guest
that you can use as a ring buffer. Then you could run a timer on the host
side to flush it, or have some sort of callback for when you urgently need
to flush it manually.

The benefit here is that you can actually make use of multiple threads.
There's no need to intercept the guest at all just because it wants to
issue some GL operations.

