Re: [Qemu-devel] [RFC v2] queue_work proposal


From: Glauber Costa
Subject: Re: [Qemu-devel] [RFC v2] queue_work proposal
Date: Thu, 22 Oct 2009 16:57:23 -0200
User-agent: Jack Bauer

On Thu, Oct 22, 2009 at 03:37:05PM -0200, Marcelo Tosatti wrote:
> On Thu, Sep 03, 2009 at 02:01:26PM -0400, Glauber Costa wrote:
> > Hi guys
> > 
> > In this patch, I am attaching an early version of a new "on_vcpu" mechanism
> > (after making it generic, I saw no reason to keep its name). It allows us to
> > guarantee that a piece of code will be executed in a certain vcpu, indicated
> > by a CPUState.
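
For illustration, a caller would look roughly like the sketch below. This is an
invented example, not part of the patch; only qemu_queue_work() and CPUState
come from it:

    /* Invented callback: by the time it runs, we are guaranteed to be
     * on the thread that owns the target CPUState. */
    static void do_on_that_vcpu(void *data)
    {
        int *done = data;
        *done = 1;
    }

    ...
    int done = 0;
    qemu_queue_work(env, do_on_that_vcpu, &done);
    /* The call is synchronous (v2 dropped the async variant), so done == 1. */
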
> > 
> > I am sorry for the big patch; I just dumped what I had so we can agree on a
> > direction early. When it comes to the submission stage, I'll split it
> > accordingly.
> > 
> > As we discussed recently on qemu-devel, I am using pthread_set/get_specific
> > for dealing with thread-local variables. Note that they are not used from
> > signal handlers. A first optimization would be to use TLS variables where
> > available.
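
Concretely, the per-thread bookkeeping amounts to something like the sketch
below. The key and the init/set helper names are guesses; only
qemu_get_current_env() is referenced by the patch:

    #include <pthread.h>

    static pthread_key_t current_env_key;

    /* Called once at startup, before any vcpu thread is created. */
    static void qemu_init_current_env(void)
    {
        pthread_key_create(&current_env_key, NULL);
    }

    /* Called by each vcpu thread when it starts up. */
    static void qemu_set_current_env(CPUState *env)
    {
        pthread_setspecific(current_env_key, env);
    }

    /* Returns the CPUState owned by the calling thread, or NULL. */
    static CPUState *qemu_get_current_env(void)
    {
        return pthread_getspecific(current_env_key);
    }
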
> > 
> > In vl.c, I am providing one version of queue_work for the IO-thread, and
> > another for normal operation. The "normal" one should fix the problems Jan
> > is having, since it does nothing more than call the function we want to
> > execute directly.
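
In other words, the non-IO-thread variant boils down to roughly the following
(a sketch, not the exact vl.c hunk):

    /* Without the IO thread there is only one thread of execution,
     * so queued work can simply run inline on the caller. */
    void qemu_queue_work(CPUState *env, void (*func)(void *data), void *data)
    {
        func(data);
    }
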
> > 
> > The io-thread version is tested with both tcg and kvm, and works (to the
> > extent they were working before, which, in the kvm case, is not much).
> > 
> > Changes from v1:
> >  * Don't open the possibility of calling queue_work asynchronously,
> >    suggested by Avi "Peter Parker" Kivity
> >  * Use a local mutex, suggested by Paolo Bonzini
> > 
> > Signed-off-by: Glauber Costa <address@hidden>
> > ---
> >  cpu-all.h  |    3 ++
> >  cpu-defs.h |   15 ++++++++++++
> >  exec.c     |    1 +
> >  kvm-all.c  |   58 +++++++++++++++++++---------------------------
> >  kvm.h      |    7 +++++
> >  vl.c       |   75 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >  6 files changed, 125 insertions(+), 34 deletions(-)
> > 
> > diff --git a/cpu-all.h b/cpu-all.h
> > index 1a6a812..529479e 100644
> > --- a/cpu-all.h
> > +++ b/cpu-all.h
> > @@ -763,6 +763,9 @@ extern CPUState *cpu_single_env;
> >  extern int64_t qemu_icount;
> >  extern int use_icount;
> >  
> > +void qemu_queue_work(CPUState *env, void (*func)(void *data), void *data);
> > +void qemu_flush_work(CPUState *env);
> > +
> >  #define CPU_INTERRUPT_HARD   0x02 /* hardware interrupt pending */
> >  #define CPU_INTERRUPT_EXITTB 0x04 /* exit the current TB (use for x86 a20 case) */
> >  #define CPU_INTERRUPT_TIMER  0x08 /* internal timer exception pending */
> 
> > @@ -3808,6 +3835,50 @@ void qemu_cpu_kick(void *_env)
> >          qemu_thread_signal(env->thread, SIGUSR1);
> >  }
> >  
> > +void qemu_queue_work(CPUState *env, void (*func)(void *data), void *data)
> > +{
> > +    QemuWorkItem wii;
> > +
> > +    env->queued_total++;
> > +
> > +    if (env == qemu_get_current_env()) {
> > +        env->queued_local++;
> > +        func(data);
> > +        return;
> > +    }
> > +
> > +    wii.func = func;
> > +    wii.data = data;
> > +    qemu_mutex_lock(&env->queue_lock);
> > +    TAILQ_INSERT_TAIL(&env->queued_work, &wii, entry);
> > +    qemu_mutex_unlock(&env->queue_lock);
> > +
> > +    qemu_thread_signal(env->thread, SIGUSR1);
> > +
> > +    qemu_mutex_lock(&env->queue_lock);
> > +    while (!wii.done) {
> > +        qemu_cond_wait(&env->work_cond, &qemu_global_mutex);
> > +    }
> > +    qemu_mutex_unlock(&env->queue_lock);
> 
> How's qemu_flush_work supposed to execute if env->queue_lock is held
> here?
> 
> qemu_cond_wait() should work with env->queue_lock, and qemu_global_mutex
> should be dropped before waiting and reacquired on return.
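
For reference, the pattern described above would look roughly like this (a
sketch only: the wrapper name is invented, the variables follow the patch):

    /* Hypothetical rewrite of the waiting tail of qemu_queue_work(),
     * along the lines suggested above. */
    static void qemu_queue_work_wait(CPUState *env, QemuWorkItem *wii)
    {
        qemu_thread_signal(env->thread, SIGUSR1);

        /* Drop the global mutex so the target vcpu thread can run
         * qemu_flush_work() and make progress on our item. */
        qemu_mutex_unlock(&qemu_global_mutex);

        qemu_mutex_lock(&env->queue_lock);
        while (!wii->done) {
            /* Wait on queue_lock, the lock qemu_flush_work() also takes. */
            qemu_cond_wait(&env->work_cond, &env->queue_lock);
        }
        qemu_mutex_unlock(&env->queue_lock);

        /* Reacquire the global mutex before returning to the caller. */
        qemu_mutex_lock(&qemu_global_mutex);
    }
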
After some thinking, I don't plan to introduce this until it is absolutely
needed. I believe we can refactor a lot of code to actually run on the vcpu it
should, instead of triggering a remote event.



