
From: Blue Swirl
Subject: Re: [Qemu-devel] Get host virtual address corresponding to guest physical address?
Date: Sun, 26 Aug 2012 17:45:59 +0000

On Sat, Aug 25, 2012 at 1:17 PM, 陳韋任 (Wei-Ren Chen)
<address@hidden> wrote:
> On Sat, Aug 25, 2012 at 11:56:13AM +0100, Peter Maydell wrote:
>> On 24 August 2012 04:14, 陳韋任 (Wei-Ren Chen) <address@hidden> wrote:
>> >   I would like to know if there is a function in QEMU which converts
>> > a guest physical address into the corresponding host virtual address.
>>
>> So the question is, what do you want to do with the host virtual
>> address when you've got it? cpu_physical_memory_map() is really intended
>> (as Blue says) for the case where you have a bit of host code that wants
>> to write a chunk of data and doesn't want to do a sequence of
>> cpu_physical_memory_read()/_write() calls. Instead you _map() the memory,
>> write to it and then _unmap() it.
>
>   We want to let the host MMU hardware do what softmmu does. As a prototype
> (x86 guest on an x86_64 host), we want to do the following:
>
>   1. Get guest page table entries (GVA -> GPA).
>
>   2. Get corresponding HVA.
>
>   3. Then we use /dev/mem (with host cr3) to find out HPA.
>
>   4. We insert the GVA -> HPA mapping into the host page table through
>      /dev/mem; we have already moved QEMU above 4G to make way for the guest.
>
> So we don't write data into the host virtual address.
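
As an illustration of step 1 (GVA -> GPA), a walk of a 32-bit non-PAE guest
page table could look roughly like the sketch below. This is not code from
the thread, just a sketch: ldl_phys() reads a 32-bit value from guest
physical memory, and PAE or long-mode guests would need additional levels.

    /* Sketch only: translate a guest virtual address to a guest physical
       address for a 32-bit non-PAE guest by walking its page tables.
       Returns (target_phys_addr_t)-1 for a non-present mapping. */
    static target_phys_addr_t gva2gpa(uint32_t cr3, uint32_t gva)
    {
        uint32_t pde, pte;

        /* Page-directory entry: directory base is CR3 bits 31:12,
           index is GVA bits 31:22. */
        pde = ldl_phys((cr3 & 0xfffff000) + ((gva >> 22) & 0x3ff) * 4);
        if (!(pde & 1)) {
            return (target_phys_addr_t)-1;          /* not present */
        }
        if (pde & 0x80) {                           /* PS bit: 4 MB page */
            return (pde & 0xffc00000) | (gva & 0x003fffff);
        }

        /* Page-table entry: table base is PDE bits 31:12,
           index is GVA bits 21:12. */
        pte = ldl_phys((pde & 0xfffff000) + ((gva >> 12) & 0x3ff) * 4);
        if (!(pte & 1)) {
            return (target_phys_addr_t)-1;          /* not present */
        }
        return (pte & 0xfffff000) | (gva & 0xfff);
    }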

I don't think this GVA to HPA mapping function will help. I'd use the
memory API to construct the GPA-HVA mappings after board init. The
GVA-GPA mappings need to be gathered from the guest MMU tables when the
MMU is enabled. Then the page tables need to be tracked, and any changes
to either the guest MMU setup/tables or the guest physical memory space
must propagate to the host memory maps.
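
One way to build such a GPA-HVA map with the memory API is sketched below.
This is only an illustration: the MemoryListener interface has changed
across QEMU versions, so field names and the registration call may need
adjusting, and record_mapping() is a hypothetical helper.

    /* Sketch: track GPA -> HVA ranges for RAM as regions are added to the
       guest physical address space.  record_mapping() is a hypothetical
       helper that would store the range for later lookup. */
    static void gpa_hva_region_add(MemoryListener *listener,
                                   MemoryRegionSection *section)
    {
        void *hva;

        if (!memory_region_is_ram(section->mr)) {
            return;                                 /* skip MMIO regions */
        }
        hva = (char *)memory_region_get_ram_ptr(section->mr)
              + section->offset_within_region;
        record_mapping(section->offset_within_address_space,  /* GPA */
                       hva, section->size);
    }

    static MemoryListener gpa_hva_listener = {
        .region_add = gpa_hva_region_add,
        /* a matching .region_del callback would drop the range again */
    };

    /* registered once after board init with memory_listener_register() */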

>
>> Note that not all guest physical addresses have a meaningful host
>> virtual address -- in particular memory mapped devices won't.
>
>   I guess in our case, we don't touch MMIO?
>
>> >   1. I am running an x86 guest on an x86_64 host and using the code
>> >      below to get the host virtual address; I am not sure what the
>> >      value of len should be.
>>
>> The length should be the length of the area of memory you want to
>> read from or write to.
>
>   Actually I want to know where guest pages are mapped in the host
> virtual address space. The GPA we get from step 1 points to the guest
> page table, and we want to know its corresponding HVA.
>
>> >         static inline void *gpa2hva(target_phys_addr_t addr)
>> >         {
>> >             target_phys_addr_t len = 4;
>> >             return cpu_physical_memory_map(addr, &len, 0);
>> >         }
>>
>> If you try this on a memory-mapped device address then the first
>> time round it will give you back the address of a "bounce buffer",
>> i.e. a bit of temporary RAM you can read/write and which unmap will
>> then actually feed to the device's read/write functions. Since you
>> never call unmap, this means that anybody else who tries to use
>> cpu_physical_memory_map() on a device from now on will get back
>> NULL (meaning resource exhaustion, because the bounce buffer is in
>> use).
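
In other words, every _map() needs a matching _unmap(). A read done through
that pairing might look roughly like the sketch below (not from the thread),
so that a bounce buffer, if one was handed out, is released for the next
caller.

    /* Sketch: read a 32-bit word through map/unmap so any bounce buffer
       is released again.  Returns 0 on success, -1 otherwise. */
    static int read_guest_word(target_phys_addr_t addr, uint32_t *val)
    {
        target_phys_addr_t len = sizeof(*val);
        void *hva = cpu_physical_memory_map(addr, &len, 0 /* is_write */);

        if (!hva) {
            return -1;      /* e.g. MMIO while the bounce buffer is in use */
        }
        if (len >= sizeof(*val)) {
            memcpy(val, hva, sizeof(*val));
        }
        /* Always unmap, even if the mapping was shorter than requested. */
        cpu_physical_memory_unmap(hva, len, 0 /* is_write */,
                                  0 /* access_len */);
        return len >= sizeof(*val) ? 0 : -1;
    }
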
>
>   You mean if I call cpu_physical_memory_map with a guest MMIO (physical)
> address, the first time it'll return the address of a buffer that I can
> write data into, but the second time it'll return NULL since I don't call
> cpu_physical_memory_unmap to flush the buffer. Do I understand you
> correctly? Hmm, I think we don't have such an issue in our use case...
> What do you think?
>
> Regards,
> chenwj
>
> --
> Wei-Ren Chen (陳韋任)
> Computer Systems Lab, Institute of Information Science,
> Academia Sinica, Taiwan (R.O.C.)
> Tel:886-2-2788-3799 #1667
> Homepage: http://people.cs.nctu.edu.tw/~chenwj


