
From: Peter Zijlstra
Subject: [Qemu-devel] Re: [PATCH] qemu-kvm: response to SIGUSR1 to start/stop a VCPU (v2)
Date: Wed, 01 Dec 2010 20:35:36 +0100

On Wed, 2010-12-01 at 14:24 -0500, Rik van Riel wrote:
> On 12/01/2010 02:07 PM, Peter Zijlstra wrote:
> > On Wed, 2010-12-01 at 12:26 -0500, Rik van Riel wrote:
> >> On 12/01/2010 12:22 PM, Peter Zijlstra wrote:
> >> The pause loop exiting & directed yield patches I am working on
> >> preserve inter-vcpu fairness by round robining among the vcpus
> >> inside one KVM guest.
> >
> > I don't necessarily think that's enough.
> >
> > Suppose you've got 4 vcpus, one is holding a lock and 3 are spinning.
> > They'll end up all three donating some time to the 4th.
> >
> > The only way to make that fair again is if due to future contention the
> > 4th cpu donates an equal amount of time back to the resp. cpus it got
> > time from. Guest lock patterns and host scheduling don't provide this
> > guarantee.
> You have no guarantees when running virtualized, guest
> CPU time could be taken away by another guest just as
> easily as by another VCPU.
>
> Even if we equalized the amount of CPU time each VCPU
> ends up getting across some time interval, that is no
> guarantee they get useful work done, or that the time
> gets fairly divided to _user processes_ running inside
> the guest.

Right, and Jeremy was working on making the guest load-balancer aware of
that, so user-space should get fairly scheduled service (of course,
that assumes you run a Linux guest with that logic in it).

> The VCPU could be running something lock-happy when
> it temporarily gives up the CPU, and get extra CPU time
> back when running something userspace intensive.
> In-between, it may well have scheduled to another task
> (allowing it to get more CPU time).
>
> I'm not convinced the kind of fairness you suggest is
> possible or useful.

Well, physical cpus get equal service, but yeah, time lost to
contention could arguably be considered equivalent to non-equal service
in the vcpu case.

Anyway, don't take it as a critique per se; your approach sounds like
the sanest proposal yet.
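For reference, the "round robining among the vcpus" that Rik mentions for directed yield can be sketched as a candidate scan that remembers where the last boost stopped. This is a simplified illustration, not the actual patch: the function name, the `runnable` predicate, and the per-VM `last_boosted` index are all assumptions made for the sketch:

```python
# Illustrative round-robin selection of a yield target among the vcpus
# of one guest. A spinning vcpu scans the others starting just past the
# last-boosted index, so repeated yields rotate through the guest's
# vcpus instead of always boosting the same one. Hypothetical sketch;
# the real logic lives in KVM's pause-loop-exit handling.

def pick_yield_target(vcpus, last_boosted, spinner, runnable):
    """Return (target, new_last_boosted): the first runnable vcpu after
    last_boosted that is not the spinner, or (None, last_boosted)."""
    n = len(vcpus)
    for offset in range(1, n + 1):
        idx = (last_boosted + offset) % n
        candidate = vcpus[idx]
        if candidate is not spinner and runnable(candidate):
            return candidate, idx
    return None, last_boosted  # nobody worth boosting

vcpus = ["vcpu0", "vcpu1", "vcpu2", "vcpu3"]
# vcpu1 spins; the scan starts after index 0, skips the spinner itself
target, last = pick_yield_target(vcpus, last_boosted=0,
                                 spinner="vcpu1",
                                 runnable=lambda v: True)
print(target, last)  # vcpu2 2
```

Because `last_boosted` advances on every successful yield, the donated time is spread across the guest's vcpus rather than concentrated on one, which is the intra-guest fairness property the thread is debating.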
