Re: [Qemu-devel] Profiling Qemu for speed?


From: Karl Magdsick
Subject: Re: [Qemu-devel] Profiling Qemu for speed?
Date: Mon, 18 Apr 2005 00:31:25 -0400

Ideally, we could force gcc to implement switch statements as indirect
jumps with jump tables inline with the code.  However, this may not be
possible.
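
Something close to this is actually possible with GNU C's
labels-as-values extension (computed goto), which builds exactly that
kind of inline jump table.  A minimal, untested sketch (the opcode
names are made up, not QEMU's):

/* Dispatch via an inline jump table of label addresses.
 * Requires GCC's labels-as-values extension (GNU C, not ISO C). */
#include <stdio.h>

static int run(const unsigned char *ops, int n)
{
    static void *table[] = { &&op_add, &&op_sub, &&op_halt };
    int acc = 0, i = 0;

    goto *table[ops[i]];

op_add:
    acc += 1;
    if (++i < n) goto *table[ops[i]];
    return acc;
op_sub:
    acc -= 1;
    if (++i < n) goto *table[ops[i]];
    return acc;
op_halt:
    return acc;
}

int main(void)
{
    unsigned char prog[] = { 0, 0, 1, 2 };  /* add, add, sub, halt */
    printf("%d\n", run(prog, 4));           /* prints 1 */
    return 0;
}

Each handler jumps straight to the next one, so there is no central
dispatch loop and no comparison chain at all.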

I think Nathaniel was just saying that gcc is likely generating
several hundred sequential if-else blocks for the large switch
statements.  That gives you O(n) dispatch time, whereas a function
pointer array gives you O(1).  (This assumes each case ends with a
break... I haven't looked at the code Nathaniel is referring to.)
Therefore, for sufficiently large switch statements, sequential
if-else chains will be slower than a function pointer array.  The
question is: is n ≈ 200 sufficiently large?  I think only empirical
testing has a chance of settling that question in everyone's mind.
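
For concreteness, here is what the O(1) alternative looks like (an
untested sketch; the handler names and NUM_OPS are mine, not the
actual code's):

/* O(1) dispatch through a function pointer array: one table
 * index and one indirect call, however many opcodes exist. */
#include <stdio.h>

enum { OP_ADD, OP_SUB, NUM_OPS };

static int op_add(int a, int b) { return a + b; }
static int op_sub(int a, int b) { return a - b; }

static int (*const handlers[NUM_OPS])(int, int) = {
    [OP_ADD] = op_add,
    [OP_SUB] = op_sub,
};

int main(void)
{
    printf("%d\n", handlers[OP_SUB](5, 3));  /* prints 2 */
    return 0;
}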

If you nest your if-else statements intelligently (as a balanced
comparison tree over the case values), you can at least get O(log n)
runtime, with a negligible difference in the leading constant compared
to the sequential if-else chain.
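
In code, the O(log n) version is just a binary search over the case
values (again an untested sketch with made-up opcodes; gcc already
emits something like this for some switches):

/* O(log n) dispatch: a balanced comparison tree over opcode
 * values.  Two comparisons reach any of the four handlers. */
#include <stdio.h>

static int dispatch(int op, int a, int b)
{
    if (op < 2) {
        if (op == 0) return a + b;         /* OP_ADD */
        else         return a - b;         /* OP_SUB */
    } else {
        if (op == 2) return a * b;         /* OP_MUL */
        else         return b ? a / b : 0; /* OP_DIV */
    }
}

int main(void)
{
    printf("%d\n", dispatch(2, 6, 7));  /* OP_MUL: prints 42 */
    return 0;
}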

There's also the possibility of dynamically generating, at startup, an
indirect jump along with a jump table inline with the code (in order
to minimize page faults, and perhaps let the prefetcher pull the
first few jump addresses into the L2 cache).
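
The basic mechanism is straightforward, even though a real version
would emit the indirect jmp plus the table itself.  An untested
x86-64/Linux sketch of just the emit-and-call part (not what qemu
does, only the idea):

/* Emit "mov eax, 42; ret" into an executable page at startup
 * and call it.  x86-64 Linux; the cast from data pointer to
 * function pointer is non-ISO but works on this platform. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, /* mov eax, 42 */
                             0xC3 };                       /* ret */

    void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED)
        return 1;

    memcpy(page, code, sizeof(code));

    int (*fn)(void) = (int (*)(void))page;
    printf("%d\n", fn());   /* prints 42 */

    munmap(page, 4096);
    return 0;
}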

Of course, as pointed out elsewhere, the dynamic code generator is
hopefully only invoked rarely, so speeding up the switch statements
may not have a noticeable effect on emulation speed.


-Karl

On 4/17/05, André Braga <address@hidden> wrote:
> The problem with table lookups (I'm assuming you're talking about
> function pointer vectors) is that they *destroy* spatial locality of
> reference that you could otherwise attain by having series of
> if-then-else instructions and some clever instruction prefetching
> mechanism on modern processors... Not to mention the function call
> overhead.



