
Re: [Qemu-devel] arm mptimer implementation - why prescaler is multiply by 10?


From: Krzeminski, Marcin (Nokia - PL/Wroclaw)
Subject: Re: [Qemu-devel] arm mptimer implementation - why prescaler is multiply by 10?
Date: Thu, 29 Oct 2015 07:00:51 +0000

From: EXT Peter Crosthwaite [mailto:address@hidden]
Sent: Tuesday, October 27, 2015 7:23 PM
To: Peter Maydell <address@hidden>
Cc: Dmitry Osipenko <address@hidden>; Krzeminski, Marcin (Nokia - PL/Wroclaw) <address@hidden>; address@hidden
Subject: Re: arm mptimer implementation - why prescaler is multiply by 10?

On Tue, Oct 27, 2015 at 11:19 AM, Peter Maydell <address@hidden> wrote:

On 27 October 2015 at 18:01, Peter Crosthwaite
<address@hidden> wrote:
> On Tue, Oct 27, 2015 at 7:19 AM, Dmitry Osipenko <address@hidden> wrote:
>> From my observation, Linux kernel is booting noticeably faster in the
>> emulated guest and host machine CPU usage is lower if we "artificially"
>> slowdown the MPtimer. You really shouldn't use it for the RTC, so doing that
>> trick shouldn't affect guest behavior.

Do you mean qemu or real hw?


> So I do wonder whether with your ptimer conversion this will be obsoleted,
> as the rate limiter there may do the work for us.

We still need to pick a nominal PERIPHCLK somehow, and that's
still a pretty arbitrary choice I think (and it doesn't
depend on the CPU speed itself: PERIPHCLK's period can be
any multiple of the main CPU CLK (minimum 2)).
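
For reference, the factor under discussion comes from how the current model converts the guest-visible prescaler into QEMU's 1 ns virtual clock. A simplified, self-contained sketch of that arithmetic (an approximation of the logic in hw/timer/arm_mptimer.c, not a verbatim copy):

/*
 * Sketch of the scaling being discussed.  The prescaler lives in bits
 * [15:8] of the timer CONTROL register; each timer tick is taken to be
 * (prescaler + 1) QEMU nanoseconds, multiplied by the extra factor of
 * 10 the thread subject asks about.  With a 1 ns virtual clock, that
 * factor amounts to assuming an effective PERIPHCLK of about 100 MHz
 * rather than 1 GHz.
 */
#include <stdint.h>
#include <stdio.h>

static inline uint32_t timerblock_scale(uint32_t control)
{
    return (((control >> 8) & 0xff) + 1) * 10;   /* ns per timer tick */
}

int main(void)
{
    uint32_t control = 0;                        /* prescaler field = 0 */
    uint32_t ns = timerblock_scale(control);
    printf("tick period %u ns -> effective PERIPHCLK ~%u MHz\n",
           ns, 1000 / ns);
    return 0;
}

Dropping the "* 10" would make one tick equal 1 ns, i.e. an implicit 1 GHz PERIPHCLK, which is the direction that matches the overhead observations in this thread.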


Yep. But it would be nice to know if we can move towards board-level configuration of this without the rate-limiting problem. Rather than a 10x rate limiter, it should be a QOM property for the PERIPHCLK frequency.
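
To make that concrete, a rough sketch of what such a qdev/QOM property could look like; the property name "periphclk-frq", the struct field and the 100 MHz default are illustrative assumptions rather than existing code, and the snippet presumes the usual QEMU tree context (hw/qdev-properties.h, ARMMPTimerState):

/*
 * Hypothetical property sketch: let the board set the PERIPHCLK
 * frequency instead of hard-coding the 10x scale.  Names and the
 * default value are assumptions for illustration only.
 */
static Property arm_mptimer_properties[] = {
    /* PERIPHCLK frequency in Hz; the board/SoC would override this */
    DEFINE_PROP_UINT32("periphclk-frq", ARMMPTimerState, periphclk_frq,
                       100 * 1000 * 1000),
    DEFINE_PROP_END_OF_LIST(),
};

/* The per-tick period in ns would then follow from the property
 * (integer division, so it truncates for non-divisor frequencies): */
static inline uint64_t timerblock_tick_ns(uint32_t control,
                                          uint32_t periphclk_frq)
{
    uint32_t prescaler = ((control >> 8) & 0xff) + 1;
    return (uint64_t)prescaler * 1000000000ull / periphclk_frq;
}

The class init would point dc->props at that array, and a board could then do something like qdev_prop_set_uint32(dev, "periphclk-frq", 600 * 1000 * 1000) before realize, which is the 600 MHz case mentioned below.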

Regards,

Peter


thanks
-- PMM


I made some tests by changing the implementation to work with PERIPHCLK=600MHz, and in fact the overhead was too high to work comfortably with a Linux guest; the key problem was networking.

