From: Anthony Liguori
Subject: Re: [Qemu-devel] [RESEND][PATCH 0/3] Fix guest time drift under heavy load.
Date: Wed, 05 Nov 2008 10:43:46 -0600
User-agent: Thunderbird 2.0.0.17 (X11/20080925)
Gleb Natapov wrote:
So? I raise them now. Have you tried the suggested scenario, and were you able to reproduce the problem?
Sorry, I mistyped. I meant to say, I don't think any of the problems raised when this was initially posted have been addressed. Namely, I asked for hard data on how much this helped things and Paul complained that this fix only fixed things partially, and was very invasive to other architectures.
Basically, there are two hurdles to overcome here. The first is that I don't think it's overwhelmingly obvious that this is the correct solution in all scenarios. We need to understand the scenarios it helps and by how much. We then probably need to make sure to limit this operation to those specific scenarios.
The second is that this is not how hardware behaves normally. This makes it undesirable from an architectural perspective. If it's necessary, we need to find a way to minimize its impact in much the way -win2k-hack's impact is minimized.
The time drift is eliminated. If there is a spike in load, time may slow down, but after that it catches up (this happens only during very high loads, though).
How bad is time drift without it? Under workload X, we lose N seconds per Y hours, and with this patch, under the same workload, we lose M seconds per Y hours, with M << N.
I strongly, strongly doubt that you'll be eliminating drift 100%. And please describe workload X in such a way that it is 100% reproducible. If you're using a multimedia file to do this, please provide a link to obtain the multimedia file.
How does having a high resolution timer in the host affect the problem to begin with?

My test machine has a relatively recent kernel that uses high resolution timers for time keeping. Also, the problem is that the guest does not receive enough CPU time to process the injected interrupt. How can a high resolution timer help here?
If the host can awaken QEMU 1024 times a second and QEMU can deliver a timer interrupt each time, there is no need for time drift fixing.
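The accounting behind "time drift fixing" can be sketched roughly as follows. This is an illustrative example, not actual QEMU code; the struct and function names are hypothetical. The idea is to compare the number of ticks that should have fired since the timer started against the number the guest has actually accepted; the difference is the backlog that any catch-up technique has to repay.

```c
#include <stdint.h>

/* Hypothetical tick-accounting sketch (names are illustrative,
 * not QEMU's). Tracks how many periodic interrupts the guest is
 * behind by at a given host time. */
typedef struct {
    int64_t period_ns;  /* nominal tick period, e.g. ~976563 ns for 1024 Hz */
    int64_t start_ns;   /* host time when the timer was armed */
    int64_t delivered;  /* ticks the guest has actually accepted */
} TickCounter;

/* Ticks owed = ticks expected from elapsed wall time, minus
 * ticks already delivered to the guest. */
static int64_t ticks_owed(const TickCounter *c, int64_t now_ns)
{
    int64_t expected = (now_ns - c->start_ns) / c->period_ns;
    return expected - c->delivered;
}
```

If the host wakes QEMU on time for every period, `ticks_owed` stays at zero and no compensation is needed, which is the point Anthony is making about high resolution host timers.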
I would think that with high res timers on the host, you would have to put the host under heavy load before drift began occurring.
How do Linux guests behave with this?

Linux guests don't use the PIT or RTC for time keeping. They are completely unaffected by these patches.
They certainly can, under the right circumstances.
Even the Windows PV spec calls out three separate approaches to dealing with missed interrupts and provides an interface for the host to query the guest as to which one should be used. I don't think any solution that uses a single technique is going to be correct.

This is what I found in the Microsoft docs:

    If a virtual processor is unavailable for a sufficiently long period of time, a full timer period may be missed. In this case, the hypervisor uses one of two techniques. The first technique involves timer period modulation, in effect shortening the period until the timer "catches up". If a significant number of timer signals have been missed, the hypervisor may be unable to compensate by using period modulation. In this case, some timer expiration signals may be skipped completely. For timers that are marked as lazy, the hypervisor uses a second technique for dealing with the situation in which a virtual processor is unavailable for a long period of time. In this case, the timer signal is deferred until this virtual processor is available. If it doesn't become available until shortly before the next timer is due to expire, it is skipped entirely.

The first technique is what I am trying to introduce with this patch series.
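The "period modulation" technique quoted above can be sketched like this. The sketch is hypothetical (the struct, function, and the particular modulation policy are assumptions, not the patch series' actual code): while ticks are owed, the next tick is scheduled early, with a floor so the guest is never flooded.

```c
#include <stdint.h>

/* Hypothetical sketch of timer period modulation: shorten the
 * effective period in proportion to the backlog of missed ticks,
 * but never below a quarter of the nominal period. */
typedef struct {
    int64_t period_ns;   /* nominal tick period */
    int64_t ticks_owed;  /* ticks injected late and not yet repaid */
} PITState;

/* Delay until the next tick: nominal when on schedule,
 * shortened while catching up. */
static int64_t next_tick_delay(const PITState *s)
{
    if (s->ticks_owed <= 0) {
        return s->period_ns;                       /* on schedule */
    }
    int64_t d = s->period_ns / (1 + s->ticks_owed);
    int64_t floor = s->period_ns / 4;              /* don't flood the guest */
    return d > floor ? d : floor;
}
```

Once the backlog is repaid, the delay returns to the nominal period, so guest wall-clock time converges back to host time rather than drifting permanently.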
There is a third technique whereby the hypervisor is supposed to modulate the delivery of missed ticks by ensuring an even distribution of them across the next few time slices. The Windows guest is supposed to be able to tell the hypervisor which technique it should be using.
Regards, Anthony Liguori
-- Gleb.