Re: [Qemu-ppc] [PATCH 2/2] tcg/ppc*: Move cache initialization to ppc specific code

From: Alexander Graf
Subject: Re: [Qemu-ppc] [PATCH 2/2] tcg/ppc*: Move cache initialization to ppc specific code
Date: Mon, 3 Oct 2011 23:36:48 +0200

On 03.10.2011, at 23:10, Stefan Weil wrote:

> On 03.10.2011 22:52, Scott Wood wrote:
>> On 10/03/2011 03:43 PM, Stefan Weil wrote:
>>> qemu_cache_utils_init() is only used by ppc / ppc64 tcg targets
>>> to initialize the cache before flush_icache_range() is called.
>>> This patch moves the code to tcg/ppc and tcg/ppc64.
>>> Initialisation is called from tcg_target_init() there.
>>> Signed-off-by: Stefan Weil <address@hidden>
>> This is not only needed for TCG. We need flush_icache_range() for KVM.
>> See http://patchwork.ozlabs.org/patch/90403/ and the thread starting
>> with http://lists.gnu.org/archive/html/qemu-ppc/2011-09/msg00180.html
>> And must this be duplicated between ppc and ppc64?
>> -Scott
> Your patch 90403 is obviously still missing from QEMU master -
> that's why I did not notice that PPC KVM needs
> flush_icache_range().
> qemu_cache_utils_init() should be called from kvm_init()
> and tcg_init() (or some function called from there), and
> cache-utils.o should only be built for ppc hosts.
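[A minimal sketch of the call-site arrangement proposed above. The init-function bodies are stand-ins, not QEMU code; only the idea of invoking qemu_cache_utils_init() from both accelerator init paths comes from the thread.]

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative sketch only: qemu_cache_utils_init() is called from
 * both the KVM and the TCG init paths, so either accelerator can
 * safely use flush_icache_range() afterwards. */
static bool cache_utils_ready;

static void qemu_cache_utils_init(void)
{
    /* On a ppc host this would probe the dcache/icache line sizes;
     * here it only records that initialization happened. */
    cache_utils_ready = true;
}

static int kvm_init(void)
{
    qemu_cache_utils_init();   /* PPC KVM needs flush_icache_range() too */
    return 0;
}

static int tcg_init(void)
{
    qemu_cache_utils_init();   /* TCG needs it when generating code */
    return 0;
}
```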

With TCG, we never execute guest code directly; we always go through TCG to 
emulate it. So the only case where we actually need to flush the icache is 
in TCG code generation, never outside it, right?

For KVM, I agree. We need some indication to flush the cache. But it doesn't 
have to be that complicated. We can simply use an inline function that is 
always called and has a few conditionals on when to actually flush. That inline 
function could easily be a no-op on !ppc, though I'm not 100% sure that no other 
arch needs this.
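[Such an always-called inline might look like the sketch below. The function name and the hard-coded 16-byte line size are assumptions for illustration, not QEMU's actual interface; a real version would use the line sizes probed by qemu_cache_utils_init().]

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the "always called, no-op on !ppc" idea: the caller
 * flushes unconditionally, and the cost disappears on hosts with a
 * coherent icache. */
static inline void qemu_icache_flush(uintptr_t start, uintptr_t stop)
{
#if defined(__powerpc__) || defined(__powerpc64__)
    uintptr_t p;
    /* Write dirty dcache lines back, then invalidate icache lines. */
    for (p = start & ~(uintptr_t)15; p < stop; p += 16) {
        asm volatile("dcbst 0,%0" : : "r"(p) : "memory");
    }
    asm volatile("sync" : : : "memory");
    for (p = start & ~(uintptr_t)15; p < stop; p += 16) {
        asm volatile("icbi 0,%0" : : "r"(p) : "memory");
    }
    asm volatile("isync" : : : "memory");
#else
    /* Hosts such as x86 keep icache and dcache coherent: nothing to do. */
    (void)start;
    (void)stop;
#endif
}
```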

> As I don't have a ppc host, it would be better if you or
> Alex could provide a working patch.

Last time I checked, the thread Scott was referring to was still active :).

