From: Peter Xu
Subject: Re: [Qemu-devel] [PATCH 2/4] chardev: make qemu_chr_fe_set_handlers() context switching safer
Date: Fri, 22 Feb 2019 16:56:33 +0800
User-agent: Mutt/1.10.1 (2018-07-13)

On Thu, Feb 21, 2019 at 04:03:57PM +0800, Peter Xu wrote:

[...]

> > +static gboolean
> > +main_context_wait_cb(gpointer user_data)
> > +{
> > +    struct MainContextWait *w = user_data;
> > +
> > +    qemu_mutex_lock(&w->lock);
> > +    qemu_cond_signal(&w->cond);
> > +    /* wait until switching is over */
> > +    qemu_cond_wait(&w->cond, &w->lock);
> 
> Could the previous signal() end up waking up this same thread here?  The
> man page for pthread_cond_broadcast says:
> 
>        The pthread_cond_signal() function shall unblock at least one
>        of the threads that are blocked on the specified condition
>        variable cond (if any threads are blocked on cond).
> 
>        If more than one thread is blocked on a condition variable, the
>        scheduling policy shall determine the order in which threads
>        are unblocked.
> 
> So AFAIU it could, because there is neither a restriction on the order
> in which waiters are woken up, nor a limit on how many waiters a single
> signal() will wake.
> 
> Why not simply use two semaphores?  Then the lock could be avoided too.
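
For concreteness, the two-semaphore handshake mentioned above could look
roughly like the sketch below (illustration only, using QEMU's
QemuSemaphore helpers; the struct and function names here are made up and
not taken from the patch):

#include "qemu/osdep.h"
#include "qemu/thread.h"

/* Illustrative only: these names are not from the patch. */
struct MainContextWaitSem {
    QemuSemaphore entered;   /* posted once the callback runs in the target context */
    QemuSemaphore done;      /* posted once the handler switch has finished */
};

/* Would run in the target context, e.g. attached via g_idle_source_new(). */
static gboolean main_context_wait_sem_cb(gpointer user_data)
{
    struct MainContextWaitSem *w = user_data;

    qemu_sem_post(&w->entered);   /* tell the switching thread we are here */
    qemu_sem_wait(&w->done);      /* block until switching is over */
    return G_SOURCE_REMOVE;
}

/* Would run in the thread that switches the chardev handlers. */
static void chr_switch_context_sem(struct MainContextWaitSem *w)
{
    qemu_sem_wait(&w->entered);   /* wait for the callback to start */
    /* ... switch the handlers here ... */
    qemu_sem_post(&w->done);      /* release the callback */
}

Since a post on a semaphore is never lost even when it happens before the
matching wait, the ordering question above would not arise and no lock
would be needed around the handshake.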

Please feel free to skip this question.  I think that when cond_signal()
is called right before cond_wait(), this thread is not yet on the cond's
wait list, so the signal cannot wake this thread up and my question is
invalid.  The cond+lock approach therefore looks fine compared to
semaphores.  Sorry for the noise.
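
To spell out why the signal cannot wake this thread itself, here is a
sketch of the switching side of the handshake (that side is not in the
quoted hunk, so the function name and the way the idle source is attached
are assumptions for illustration only):

#include "qemu/osdep.h"
#include "qemu/thread.h"

/* Fields as used in the quoted hunk; the real struct may have more. */
struct MainContextWait {
    QemuMutex lock;
    QemuCond cond;
};

/* Assumed shape of the thread that switches the chardev handlers. */
static void chr_switch_context(struct MainContextWait *w, GMainContext *ctx)
{
    GSource *src = g_idle_source_new();

    g_source_set_callback(src, main_context_wait_cb, w, NULL);

    qemu_mutex_lock(&w->lock);
    g_source_attach(src, ctx);
    g_source_unref(src);

    /*
     * Wait for main_context_wait_cb() to run in the target context.
     * qemu_cond_wait() atomically drops w->lock and puts us on the
     * cond's wait queue, so the callback's qemu_cond_signal() can only
     * wake us: the callback thread itself is not on the wait queue
     * until its own later qemu_cond_wait() call.
     */
    qemu_cond_wait(&w->cond, &w->lock);

    /* ... switch the chardev handlers to the new context here ... */

    /* Let main_context_wait_cb() continue: switching is over. */
    qemu_cond_signal(&w->cond);
    qemu_mutex_unlock(&w->lock);
}

And assuming the lock is taken before the source is attached, as sketched
above, the callback cannot grab w->lock until the switching thread has
dropped it inside qemu_cond_wait(), so the first signal cannot be lost
either.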

Regards,

-- 
Peter Xu


