[Qemu-devel] Re: POLL: Why do you use kqemu?


From: Avi Kivity
Subject: [Qemu-devel] Re: POLL: Why do you use kqemu?
Date: Mon, 08 Jun 2009 16:24:10 +0300
User-agent: Thunderbird 2.0.0.21 (X11/20090320)

Jan Kiszka wrote:
> Avi Kivity wrote:
>> Jan Kiszka wrote:
>>> And the fact that kqemu has to use tcg in order to achieve reasonable
>>> performance is rather a disadvantage. The complexity and overhead of
>>> synchronizing tcg with the in-kernel accelerator are enormous. If there
>>> were a feasible way to overcome this with kqemu, it would benefit a lot.
>>> But unfortunately there is none (given that you don't want to invest
>>> unreasonable effort).
>>
>> Note that kvm suffers from something similar (to a smaller degree) as
>> well: if a guest pages in its page tables, kvm knows nothing about it
>> and will thus have outdated shadows.  To date we haven't encountered a
>> problem with this, but it's conceivable.  I think Windows can page its
>> page tables, but maybe that's disabled by default, or maybe it doesn't
>> DMA directly into the page tables.
>
> Can't follow; I always thought that kernel space gets informed when some
> I/O operation handled by user space modifies an "interesting" page.

It doesn't.  Host userspace has unrestricted access to guest memory.

>> Not sure how to fix.  Maybe write protect the host page tables when we
>
> You mean guest page table?

Both :)

When kvm write-protects a guest page table through the shadow page table
entries pointing to that guest page, it should also write-protect it
through the host page table entries mapping the same guest page.

>> shadow a page table, and get an mmu notifier to tell us when it's made
>> writable?  Seems expensive.  Burying head in sand is much easier.


> Does this still apply to nested paging? I guess (hope) not...

No, nested paging brings cancer and cures world peace.  Or something.

--
error compiling committee.c: too many arguments to function
