From: Alexander E. Patrakov
Subject: [Qemu-devel] Benchmark (was: Re: [PATCH] upgraded mmap patches)
Date: Tue, 11 Jan 2005 21:15:24 +0500
User-agent: KNode/0.8.1

Magnus Damm wrote:

> On Sun, 09 Jan 2005 21:39:42 +0500, Alexander E. Patrakov
> <address@hidden> wrote:
>> Magnus Damm wrote:
>> 
>> > Hi All,
>> >
>> > I have just finished upgrading the CONFIG_MMU_MAP (aka mmap) patches
>> > written by Piotrek. I have only tested building the code with
>> > --target-list=i386-softmmu, and only on x86 hosts running Linux. With
>> > the patch I am able to run Win XP SP2 as a guest both with and without
>> > mmap acceleration.
>> >
>> > The changes since Piotrek's last release are not many: I just made sure
>> > the code applies to today's CVS, and I added some basic code to support
>> > quad-word memory access. That part is totally untested and probably
>> > broken right now. Someone please look at part4 of the patch and verify
>> > that I read the 32-bit words the right way.
>> >
>> > Sorry PowerPC folks, no ppc host support yet. I will port the ppc host
>> > patches once someone has fixed ppc so that it compiles cleanly...
>> >
>> > The first three parts are simply upgraded versions of the files
>> > v1-part[1-3].patch.gz that Piotrek posted to the list around a month
>> > ago. Part4 contains changes by me.
>> >
>> > Enjoy, please report back to the list with problems and benchmarks!
>> 
>> Is a new patched qemu really supposed to be much slower than the original
>> in the case when /proc/sys/vm/max_map_count doesn't contain a number
>> that's big enough to enable acceleration?
> 
> I don't think so. I believe that the code should just fall back on the
> ordinary softmmu implementation in that case. But I do not know the
> code very well.
> 
> Did you experience the same effect with the old version of the patches?

Sorry for the noise. This problem was in fact related to a DNS timeout. See
objective benchmark results below.

As a benchmark, I used the following command:

time madplay -o null:- bigfile.mp3

(i.e. a decode-only test)

The test was run twice so that bigfile.mp3 would be fully cached; the second
run is the one that matters.
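
For completeness, the whole procedure can be scripted; this is just a sketch
of the two runs (the 2>/dev/null on the first run is only there to keep its
output out of the way):

# first run only warms the page cache for bigfile.mp3
madplay -o null:- bigfile.mp3 2> /dev/null
# second run is the one whose timing is reported below
time madplay -o null:- bigfile.mp3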

madplay is version 0.15.2 both on the host and on the guest.
The guest is an old (but post-6.0) SVN version of LFS; it uses kernel 2.6.10.
The host is Debian Unstable with the 2.4.27-2 kernel; the maximum user rtc
frequency is set to 1024 Hz. The clock in the guest is accurate with these
settings.

bigfile.mp3 is 112 kbps 44 kHz joint-stereo, and it lasts 5 minutes 45
seconds.

On the host, cat /proc/cpuinfo results in the following:

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 11
model name      : Intel(R) Celeron(TM) CPU                1200MHz
stepping        : 1
cpu MHz         : 1202.750
cache size      : 256 KB
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 2
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 mmx fxsr sse
bogomips        : 2398.61

On the host, decoding takes 6.644s real.

In regular i386-softmmu qemu, decoding takes 1m54s, which is a slowdown
factor of 17.3 with respect to the host.

In patched qemu with an insufficient max_map_count setting, decoding takes
2m04s, an 18.6x slowdown.

In patched qemu with a sufficient max_map_count setting, decoding takes
1m17s, an 11.6x slowdown - a big improvement over the results above. And
indeed, madplay can play mp3s inside qemu in real time.
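
For anyone trying to reproduce the "good" setting: the acceleration depends
on the host's /proc/sys/vm/max_map_count being large enough. I am not quoting
the exact threshold the patch wants here, so the 262144 below is only an
illustrative value - pick whatever the patch actually requires:

# check the current limit
cat /proc/sys/vm/max_map_count
# raise it (as root); 262144 is only an example value
echo 262144 > /proc/sys/vm/max_map_count
# or, equivalently:
sysctl -w vm.max_map_count=262144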

However, I think this benchmark is a bit synthetic. Namely, why is CPU usage
almost 50% during mp3 playback when the figures above say it should be about
22%?
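
The 22% figure is just arithmetic on the timings quoted above:

# bigfile.mp3 holds 5 min 45 s = 345 s of audio
# decoding it in patched qemu takes 77 s
# so real-time playback should need roughly 77 / 345 = ~22% CPU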

(side note: sound is implemented much better than in VMware - I got no
dropouts despite such a bad kernel combination and the fact that alsa+dmix
are used both on the host and in the guest).

-- 
Alexander E. Patrakov




