

From: Avi Kivity
Subject: [Qemu-devel] Re: [PATCH] qemu-kvm: response to SIGUSR1 to start/stop a VCPU (v2)
Date: Thu, 02 Dec 2010 14:41:35 +0200
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.12) Gecko/20101103 Fedora/1.0-0.33.b2pre.fc14 Lightning/1.0b3pre Thunderbird/3.1.6

On 12/02/2010 01:47 PM, Srivatsa Vaddagiri wrote:
On Thu, Dec 02, 2010 at 11:17:52AM +0200, Avi Kivity wrote:
>  On 12/01/2010 09:09 PM, Peter Zijlstra wrote:
>  >>
>  >>   We are dealing with just one task here (the task that is yielding).
>  >>   After recording how much timeslice we are "giving up" in current->donate_time
>  >>   (donate_time is perhaps not the right name to use), we adjust the yielding
>  >>   task's vruntime as per existing logic (for example, to make it go to the
>  >>   back of the runqueue). When the yielding task gets to run again, the lock is
>  >>   hopefully available for it to grab, and we let it run longer than the default
>  >>   sched_slice() to compensate for the time it gave up previously to other
>  >>   threads in the same runqueue. This ensures that, because of yielding upon
>  >>   lock contention, we are not leaking bandwidth in favor of other guests.
>  >>   Again, I don't know how much of a fairness issue this is in practice, so
>  >>   unless we see some numbers I'd prefer sticking to plain yield() upon lock
>  >>   contention (for unmodified guests, that is).
>  >
>  >  No, that won't work. Once you've given up time you cannot add it back
>  >  without destroying fairness.
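The compensation scheme quoted above might be sketched roughly as follows. This is only an illustration of the idea, not the actual CFS code: the struct, the `donate_time` field, and both function names are hypothetical, and the slice constant is a placeholder.

```c
#include <assert.h>

/* Illustrative stand-in for a task's scheduling state; field and
 * function names are hypothetical, not real kernel code. */
struct task {
    unsigned long donate_time;   /* timeslice given up on yield, in ns */
};

/* Placeholder default slice; the real value comes from sched_slice(). */
static const unsigned long DEFAULT_SLICE_NS = 4000000UL;

/* On yield due to lock contention: record how much of the slice was
 * left unused and given to other threads on the runqueue. */
static void yield_on_contention(struct task *t, unsigned long unused_ns)
{
    t->donate_time += unused_ns;
}

/* When the task runs again, extend its slice by the recorded amount,
 * so that over time no bandwidth leaks to other guests. */
static unsigned long next_slice(struct task *t)
{
    unsigned long slice = DEFAULT_SLICE_NS + t->donate_time;
    t->donate_time = 0;
    return slice;
}
```

Peter's objection applies precisely to the `next_slice()` step: handing the time back later lets the task exceed its fair share during that window.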

Over shorter intervals, perhaps. Over longer intervals (a few seconds to a couple of
minutes), shouldn't fairness be unaffected because of this feedback? In any case,
don't we have similar issues with directed yield as well?

Directed yield works by donating vruntime to another thread. If you don't have vruntime, you can't donate it (well, you're guaranteed to have some, since you're running, but if all you have is a microsecond's worth, that's what you can donate).
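The donation described here could be sketched as a capped, symmetric transfer. This is a toy model, not the proposed implementation: the `vthread` struct is hypothetical, and how the donable budget is computed is left outside the sketch (the point being that you can only give what you have).

```c
#include <assert.h>

/* Hypothetical per-thread scheduling entity; smaller vruntime runs sooner. */
struct vthread {
    unsigned long long vruntime;  /* virtual runtime, in ns */
};

/* Transfer up to 'wanted' ns of vruntime from the yielder to the target,
 * capped by the yielder's donable budget. The target's vruntime drops
 * (it moves left in the rb-tree and runs sooner) while the donor's rises
 * by the same amount, so total vruntime is conserved. */
static unsigned long long directed_yield(struct vthread *from,
                                         struct vthread *to,
                                         unsigned long long wanted,
                                         unsigned long long budget)
{
    unsigned long long gift = wanted < budget ? wanted : budget;

    to->vruntime   -= gift;  /* target runs earlier */
    from->vruntime += gift;  /* donor pays the same amount */
    return gift;
}
```

A donor with only a microsecond's worth of budget can thus only advance the target by a microsecond, matching the limitation described above.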

>
>  What I'd like to see in directed yield is donating exactly the
>  amount of vruntime that's needed to make the target thread run.

I presume this requires the target vcpu to move left in the rb-tree so that it
runs earlier than currently scheduled, and that it doesn't involve any
change to the sched_period() of the target vcpu?

I was just wondering how this would work in the case of buggy guests. Let's say a
guest ran into an AB<->BA deadlock: VCPU0 spins on lock B (currently held by
VCPU1), while VCPU1 spins on lock A (currently held by VCPU0). Both keep
boosting each other's vruntime, potentially affecting fairness for other guests
(to the point of starving them, perhaps)?

We preserve vruntime overall: if you give vruntime to someone, it comes at your own expense, so the total is unchanged.
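This conservation argument can be illustrated for the deadlock scenario above. A minimal sketch, with made-up numbers: two deadlocked vcpus keep gifting vruntime to each other, but since every gift debits the donor by exactly what it credits the target, the pair's combined vruntime never changes, and the rest of the runqueue keeps its share.

```c
#include <assert.h>

/* Symmetric vruntime gift: the donor's vruntime rises (it falls back
 * in the queue) by exactly the amount the target's drops (it runs
 * sooner). Purely illustrative, not kernel code. */
static void gift(unsigned long long *from, unsigned long long *to,
                 unsigned long long amount)
{
    *from += amount;  /* donor falls behind */
    *to   -= amount;  /* target moves forward */
}
```

Driving this in a loop that mimics the AB<->BA spin, with each vcpu repeatedly boosting the other, leaves the sum of their vruntimes invariant no matter how many rounds the deadlocked pair goes through.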


--
error compiling committee.c: too many arguments to function



