[Qemu-devel] Re: [PATCH 0/4] Improve -icount, fix it with iothread


From: Paolo Bonzini
Subject: [Qemu-devel] Re: [PATCH 0/4] Improve -icount, fix it with iothread
Date: Wed, 23 Feb 2011 13:42:59 +0100
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.13) Gecko/20101209 Fedora/3.1.7-0.35.b3pre.fc14 Lightning/1.0b3pre Mnenhy/0.8.3 Thunderbird/3.1.7

On 02/23/2011 12:08 PM, Edgar E. Iglesias wrote:
>> No, this supersedes Marcelo's patch.  10-20% doesn't seem comparable to
>> "looks like it deadlocked" anyway.  Also, Jan has ideas on how to remove
>> the synchronization overhead in the main loop for TCG+iothread.
>
> I see. I tried booting two of my MIPS and CRIS linux guests with iothread
> and -icount 4. Without your patch, the boot crawls super slow. Your patch
> gives a huge improvement. This was the "deadlock" scenario which I
> mentioned in previous emails.
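
As background, -icount N makes the virtual CPU execute one instruction
every 2^N ns of virtual time, so -icount 4 charges 16 ns per instruction.
A minimal stand-alone sketch of that accounting (the instruction count
below is purely illustrative, not a number from this thread):

/* Sketch of the -icount time model: with -icount N, each executed guest
 * instruction accounts for 2^N ns of virtual time.  The instruction
 * count below is made up, purely for illustration. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static int64_t icount_to_ns(int64_t insns, unsigned shift)
{
    return insns << shift;              /* 2^shift ns per instruction */
}

int main(void)
{
    int64_t insns = 1000000;            /* hypothetical instruction count */
    printf("-icount 4: %" PRId64 " ns of virtual time\n",
           icount_to_ns(insns, 4));
    return 0;
}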

> Just to clarify the previous test where I saw slowdown with your patch:
> A CRIS setup that has a CRIS CPU and basically only two peripherals,
> a timer block and a device (X) that computes stuff but delays the results
> with a virtual timer. The guest CPU is 99% of the time just
> busy-waiting for device X to get ready.
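
The usual way a device like X defers its result is to arm a timer on the
virtual clock and only flag completion in the callback. A rough sketch of
that pattern, using today's QEMU timer API names (timer_new_ns/timer_mod,
which differ from the spellings in use when this thread was written); the
device name, fields and the 10 us delay are placeholders, not taken from
the setup described above:

/* Sketch of a device that computes a result but only makes it available
 * after a delay on the virtual clock, so a polling guest busy-waits
 * until the timer fires.  Names, fields and the delay are illustrative. */
#include "qemu/osdep.h"
#include "qemu/timer.h"

typedef struct DeviceXState {
    QEMUTimer *delay_timer;
    uint64_t result;
    bool result_ready;
} DeviceXState;

/* Timer callback: runs once virtual time has advanced by the delay. */
static void device_x_done(void *opaque)
{
    DeviceXState *s = opaque;
    s->result_ready = true;          /* guest's status poll now succeeds */
}

/* Guest kicks off a computation, e.g. via an MMIO write. */
static void device_x_start(DeviceXState *s, uint64_t operand)
{
    s->result = operand * 2;         /* placeholder "computation" */
    s->result_ready = false;
    /* Complete 10 us of virtual time later. */
    timer_mod(s->delay_timer,
              qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 10 * 1000);
}

/* Called once at device init. */
static void device_x_init_timer(DeviceXState *s)
{
    s->delay_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, device_x_done, s);
}

Because completion is tied to QEMU_CLOCK_VIRTUAL, the guest's busy-wait
only ends when virtual time advances, which is why a setup like this is
sensitive to how -icount and the iothread drive the clock.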

> This latter test runs in 3.7s with icount 4 and without iothread,
> with or without your patch.

Thanks for testing this.

> With icount 4 and iothread it runs in ~1m5s without your patch and
> ~1m20s with your patch. That was the 20% slowdown I mentioned earlier.

Ok, so it is in both cases with iothread. We go from 16x slowdown to 19x on
one testcase :) and "huge improvement" on another. (Also, the CRIS images on
qemu.org simply hang for me without my patch and numeric icount---and the
watchdog triggers---so that's another factor in favor of the patches). I
guess we can live with the slowdown for now, if somebody else finds the
patch okay.

Do you have images for the slow test?

Paolo


