qemu-devel

Re: [Qemu-devel] qemu & kernel: addresses generated are non-uniform


From: sparsh mittal
Subject: Re: [Qemu-devel] qemu & kernel: addresses generated are non-uniform
Date: Mon, 21 Nov 2011 17:00:59 -0600

Thanks for the answer. For now I am OK with this, since I realized that cache accesses are still uniform (the cache set is computed by a modulo on the address) and my work concerns the cache. Still, I am glad to know that this behavior is expected rather than anomalous.
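To spell out the modulo mapping I mean, here is a tiny sketch in C (the 64-byte line size and 512 sets are assumed numbers for illustration, not the actual MARSS configuration):

    #include <stdint.h>
    #include <stdio.h>

    /* Assumed geometry: 64-byte lines, 512 sets. The set index drops the
     * offset bits and takes the block number modulo the set count, so even
     * a skewed physical address distribution still touches all sets. */
    #define LINE_SIZE 64u
    #define NUM_SETS  512u

    static unsigned set_index(uint64_t paddr)
    {
        return (unsigned)((paddr / LINE_SIZE) % NUM_SETS);
    }

    int main(void)
    {
        /* Two addresses about 4 GiB apart map to the same set (64). */
        printf("%u %u\n", set_index(0x1000), set_index(0x100001000ULL));
        return 0;
    }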

Also, I have one very important question and would be grateful if you could answer it:
I am using the MARSS cycle-accurate simulator, which is built on QEMU. It is a full-system simulator and reports both user and kernel statistics. However, even if I take only the user statistics, they vary a lot between two runs. To clarify: if I run a simulation with some configuration once, and then a second time without changing anything, the statistics differ.

I was wondering if this has something to do with QEMU, or whether there is any QEMU option that can make the simulation deterministic. I tried using -icount auto, and some variation is still there.
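For reference, the kind of command line I have been trying looks roughly like this (the flags beyond -icount are my guesses at removing other nondeterminism sources such as the host RTC and network; guest.img is just a placeholder):

    qemu-system-x86_64 -icount auto -rtc clock=vm -net none \
        -m 2048 -hda guest.img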

Is it true that the load on the host machine affects QEMU's operation? My friend observed that if two simulations (with different configurations) are run in parallel, the variation is larger than if they are run in series (one after the other).

With SimpleScalar EIO files, I have never observed any variation. I would be grateful for some help.

Thanks and Regards
Sparsh Mittal




On Sun, Nov 20, 2011 at 8:13 PM, Mulyadi Santosa <address@hidden> wrote:
On Fri, Nov 18, 2011 at 21:49, sparsh mittal <address@hidden> wrote:
> GB range     number of addresses
> 0 - 0.5      3325
> 0.5 - 1      1253
> 1 - 1.5      0
> 1.5 - 2      30
> 2 - 2.5      0
> 2.5 - 3      1708
> 3 - 3.5      10521
> 3.5 - 4      0
> 4 - 4.5      15428

Hi...

I have never observed address usage like you describe, but I think it
is expected.

The reason is that the Linux kernel tends to allocate pages, including
page tables, from high memory (ZONE_HIGHMEM, above 896 MiB on 32-bit
x86). This is done to lower the "pressure" on the normal memory zone.
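To illustrate the idea roughly (this is a kernel-internal sketch, not the exact call chain, and not a standalone program):

    #include <linux/gfp.h>
    #include <linux/mm_types.h>

    /* GFP_HIGHUSER asks the buddy allocator to try ZONE_HIGHMEM first and
     * fall back to ZONE_NORMAL / ZONE_DMA only when highmem is tight,
     * which pushes user pages toward the upper physical address ranges. */
    static struct page *grab_user_page(void)
    {
            return alloc_page(GFP_HIGHUSER);
    }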

Now, for the "imbalance", my guess is that it is due to heavy use of
the slab allocator. I am not sure where exactly its caches end up in
RAM. One thing is for sure: they act as caches for frequently used
objects such as task structs, bios, and socket buffers.
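If you have not met slab before, the pattern is roughly like this ("struct foo" and the cache name are invented for illustration):

    #include <linux/init.h>
    #include <linux/errno.h>
    #include <linux/slab.h>

    struct foo { int a, b; };          /* hypothetical cached object */
    static struct kmem_cache *foo_cache;

    /* Kernel-internal sketch: a dedicated cache recycles freed objects of
     * one type, so allocations of task structs, bios, skbs, etc. keep
     * landing in the same physical regions. */
    static int __init foo_init(void)
    {
            foo_cache = kmem_cache_create("foo_cache", sizeof(struct foo),
                                          0, SLAB_HWCACHE_ALIGN, NULL);
            return foo_cache ? 0 : -ENOMEM;
    }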

So, as you can guess, this comes from machinery in Linux memory
management that is quite complicated. I am not sure there is a
shortcut to reshape it.

--
regards,

Mulyadi Santosa
Freelance Linux trainer and consultant

blog: the-hydra.blogspot.com
training: mulyaditraining.blogspot.com

