
Re: [Qemu-devel] [PATCH 0/5] mc146818rtc: fix Windows VM clock faster


From: Xiao Guangrong
Subject: Re: [Qemu-devel] [PATCH 0/5] mc146818rtc: fix Windows VM clock faster
Date: Thu, 13 Apr 2017 16:52:55 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.8.0



On 04/13/2017 04:39 PM, Xiao Guangrong wrote:


On 04/13/2017 02:37 PM, Paolo Bonzini wrote:


On 12/04/2017 17:51, address@hidden wrote:
The root cause is that clocks are lost if the periodic period is changed,
because the current code computes the next periodic time like this:
      next_irq_clock = (cur_clock & ~(period - 1)) + period;

Consider the case where cur_clock = 0x11FF and period = 0x100: the
next_irq_clock is then 0x1200, so only 1 clock is left before the next
irq is triggered, instead of a full period. Unfortunately, Windows guests
(at least Windows 7) change the period very frequently when running the
attached code, so the lost clocks accumulate and the wall time becomes
faster and faster.

Very interesting.


Yes, indeed.

However, I think that the above should be exactly how the RTC should
work.  The original RTC circuit had 22 divider stages (see page 13 of
the datasheet[1], at the bottom right), and the periodic interrupt taps
the rising edge of one of the dividers (page 16, second paragraph).  The
datasheet also never mentions a comparator being used to trigger the
periodic interrupts.


That was my thought before too. However, after more testing, I am not
sure whether reconfiguring RegA changes the internal state of these
divider stages...

Have you checked that this Windows bug does not happen on real hardware
too?  Or is it the combination of driftfix=slew and changing periods
that is the problem?


I have two physical Windows 7 machines, both with 'useplatformclock =
off' and NTP disabled, and their wall time is really accurate. The
difference is that the physical machines use the Intel Q87 LPC chipset,
which is mc146818rtc compatible. On a VM, however, the issue is easily
reproduced within ~10 minutes.

Our tests mostly focus on 'driftfix=slew', and after this patchset the
time is accurate and stable.

I will test dropping 'slew' and see what happens...


Well, the time is easily observed to run faster if 'driftfix=slew' is
not used. :(



