
From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH 5/9] memory: let address_space_rw/ld*/st* run outside the BQL
Date: Thu, 25 Jun 2015 10:13:43 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.6.0


On 24/06/2015 20:50, Alex Bennée wrote:
> 
> Paolo Bonzini <address@hidden> writes:
> 
>> On 24/06/2015 18:56, Alex Bennée wrote:
>>> This is where I get confused about the advantage of this over plain
>>> same-pid recursive locking. If you use recursive locking you don't need
>>> to add a bunch of state to remind you when to release the lock.
>>>
>>> Am I missing something here?
>>
>> The semantics of recursive locking with respect to condition variables
>> are not clear.  Either cond_wait releases all locks, and then the mutex
>> can be released when the code doesn't expect it to be, or cond_wait
>> doesn't release all locks and then you have deadlocks.
>>
>> So, recursive locking is okay if you don't have condition variables
>> attached to the lock (and if you cannot do without it), but
>> qemu_global_mutex does have them.
> 
> Ahh OK, so I was missing something ;-)
> 
>> QEMU has so far tried to use the solution that Stevens outlines here:
>> https://books.google.it/books?id=kCTMFpEcIOwC&pg=PA434
> 
> Unfortunately I can't read that link but it sounds like I should get
> myself a copy of the book.

Try following the link from
https://en.wikipedia.org/wiki/Reentrant_mutex#References.

> I take it that approach wouldn't approve of:
> 
> static __thread int iothread_lock_count;
> 
> void qemu_mutex_lock_iothread(void)
> {
>     if (iothread_lock_count == 0) {
>         qemu_mutex_lock(&qemu_global_mutex);
>     }
>     iothread_lock_count++;
> }
> 
> void qemu_mutex_unlock_iothread(void)
> {
>     iothread_lock_count--;
>     if (iothread_lock_count == 0) {
>         qemu_mutex_unlock(&qemu_global_mutex);
>     }
>     if (iothread_lock_count < 0) {
>         fprintf(stderr, "%s: error, too many unlocks %d\n", __func__,
>                 iothread_lock_count);
>     }
> }
> 
> Which should achieve the same "only one lock held" semantics but still
> make the calling code a little less worried about tracking the state.

This is effectively implementing the "other" semantics: cond_wait always
drops the lock.

BTW, fine-grained recursive mutexes are bad for another reason: you can
think of "taking the mutex" as "ensuring all the data structure's
invariants hold" (at the time you acquire the lock, no other thread is
modifying the state, so any invariant that held at unlock time must
still hold).  This is not true if you can take the mutex recursively.
But I'm honestly not sure how much of this argument applies to
something as coarse as the iothread lock.

The best argument I have against recursive mutexes is that they're a
one-way street: once you've decided to make a mutex recursive, it's
very hard to make it non-recursive again.

Paolo


