Re: [Qemu-devel] Kernel memory allocation debugging with Qemu

From: Blue Swirl
Subject: Re: [Qemu-devel] Kernel memory allocation debugging with Qemu
Date: Fri, 8 Feb 2008 21:13:09 +0200

On 2/8/08, Paul Brook <address@hidden> wrote:
> > The patch takes half of the memory and slows down the system. I
> > think Qemu could be used instead: a channel (IO/MMIO) is created
> > between the memory allocator in the target kernel and Qemu running
> > on the host. The memory allocator reports each allocated area to
> > Qemu over the channel. Qemu changes the physical memory mapping for
> > the area to special memory that reports any read-before-write back
> > to the allocator. Writes change the memory back to standard RAM.
> > Performance would be comparable to Qemu in general, and the host
> > kernel + Qemu only take a few MB of memory. The system would be
> > directly usable for other OSes as well.
> The qemu implementation isn't actually any more space-efficient than the
> in-kernel implementation. You still need the same amount of bookkeeping RAM.
> In both cases it should be possible to reduce the overhead from 1/2 to 1/9 by
> using a bitmask rather than whole bytes.
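The 1/9 figure follows from keeping one bookkeeping bit per byte of tracked RAM: the mask costs N/8 bytes for N bytes of RAM, i.e. (N/8)/(N + N/8) = 1/9 of the combined total. A minimal sketch of such a bitmask (all names hypothetical, not actual qemu or kernel code):

```c
#include <stdint.h>
#include <stdlib.h>

/* One bookkeeping bit per tracked byte: bit set = "written at least once".
 * For N bytes of RAM this costs N/8 bytes, so the overhead is
 * (N/8) / (N + N/8) = 1/9 of the combined total. */
typedef struct {
    uint8_t *written;   /* bitmask, one bit per RAM byte */
    size_t   ram_size;
} shadow_t;

static shadow_t *shadow_new(size_t ram_size)
{
    shadow_t *s = malloc(sizeof *s);
    s->ram_size = ram_size;
    s->written = calloc((ram_size + 7) / 8, 1); /* all bits clear: unwritten */
    return s;
}

static void shadow_mark_written(shadow_t *s, size_t addr)
{
    s->written[addr / 8] |= (uint8_t)(1u << (addr % 8));
}

/* Returns nonzero if the byte has never been written. */
static int shadow_read_before_write(const shadow_t *s, size_t addr)
{
    return !(s->written[addr / 8] & (1u << (addr % 8)));
}
```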

Qemu would not track all memory, only the regions that kmalloc() has
handed out to other kernel code and that have not yet been written to.
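On that scheme the Qemu side only needs to remember the outstanding, never-written allocations the guest has reported over the channel. A rough sketch with hypothetical names, simplified so that the first write reverts the whole region rather than just the written bytes:

```c
#include <stdint.h>
#include <stdlib.h>

/* Each region the guest allocator reports via the (hypothetical)
 * IO/MMIO channel is watched until the guest writes to it. */
typedef struct watch_region {
    uint64_t base, len;
    struct watch_region *next;
} watch_region;

static watch_region *watch_head;

/* Guest kmalloc() notified us of a fresh allocation. */
static void watch_add(uint64_t base, uint64_t len)
{
    watch_region *r = malloc(sizeof *r);
    r->base = base;
    r->len = len;
    r->next = watch_head;
    watch_head = r;
}

static watch_region **watch_find(uint64_t addr)
{
    for (watch_region **pp = &watch_head; *pp; pp = &(*pp)->next)
        if (addr >= (*pp)->base && addr < (*pp)->base + (*pp)->len)
            return pp;
    return NULL;
}

/* A guest write reverts the region to plain RAM (simplification:
 * the whole region, not just the written bytes). */
static void watch_write(uint64_t addr)
{
    watch_region **pp = watch_find(addr);
    if (pp) {
        watch_region *r = *pp;
        *pp = r->next;
        free(r);
    }
}

/* A guest read from a still-watched region is a read-before-write. */
static int watch_read(uint64_t addr)
{
    return watch_find(addr) != NULL;
}
```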

> Performance is less clear. A qemu implementation probably causes less
> relative slowdown than an in-kernel implementation. However it's still going
> to be significantly slower than normal qemu.  Remember that any checked
> access is going to have to go through the slow case in the TLB lookup. Any
> optimizations that are applicable to one implementation can probably also be
> applied to the other.

Again, we are not trapping all accesses. The fast case should be used
for most kernel accesses and all of userland.
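The fast/slow split could look roughly like this (purely illustrative, not Qemu's actual TLB code): a per-page flag routes an access through the checked slow path only when the page holds watched bytes, so unflagged kernel pages and all of userland stay on the plain load:

```c
#include <stdint.h>

#define PAGE_SHIFT 12
#define NPAGES     256          /* toy address space for illustration */

static uint8_t page_flags[NPAGES]; /* nonzero: page holds watched bytes */
static int slow_path_hits;

/* Checked access: only reached for flagged pages. */
static uint8_t slow_read(uint64_t addr)
{
    slow_path_hits++;
    /* ...would consult the per-byte bookkeeping and report here... */
    return 0;
}

static uint8_t guest_read(const uint8_t *ram, uint64_t addr)
{
    if (page_flags[addr >> PAGE_SHIFT])  /* rare: watched page */
        return slow_read(addr);
    return ram[addr];                    /* common: plain fast load */
}
```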

> Given qemu is significantly slower to start with, and depending on the
> overhead of taking the page fault, it might not end up much better overall. A
> KVM implementation would most likely be slower than the in-kernel one.
> That said, it may be an interesting thing to play with. In practice it's
> probably most useful to generate an interrupt and report back to the guest
> OS, rather than having qemu report faults directly.

The access could happen while interrupts are disabled, so a buffer
would be needed. The accesses could also be written to a block device
visible to both Qemu and the kernel, or appear to arrive from a fake
network device.
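That buffer could be a simple fixed ring that Qemu fills on each fault and the guest drains once interrupts are enabled again; a minimal single-producer sketch, all names hypothetical:

```c
#include <stdint.h>

#define RING_SIZE 256   /* power of two, so unsigned wraparound is safe */

typedef struct {
    uint64_t addr[RING_SIZE];  /* faulting guest addresses */
    unsigned head, tail;       /* head: next write, tail: next read */
} report_ring;

/* Qemu side: queue a read-before-write report; drop it if full. */
static int ring_push(report_ring *r, uint64_t addr)
{
    if (r->head - r->tail == RING_SIZE)
        return 0;                       /* full: report lost */
    r->addr[r->head++ % RING_SIZE] = addr;
    return 1;
}

/* Guest side: drain one report once it is safe (interrupts enabled). */
static int ring_pop(report_ring *r, uint64_t *addr)
{
    if (r->head == r->tail)
        return 0;                       /* empty */
    *addr = r->addr[r->tail++ % RING_SIZE];
    return 1;
}
```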
