From: Sergey Fedorov
Subject: Re: [Qemu-devel] [PATCH v5 07/18] qemu-thread: add simple test-and-set spinlock
Date: Tue, 17 May 2016 23:35:57 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.7.2

On 17/05/16 22:38, Emilio G. Cota wrote:
> On Tue, May 17, 2016 at 20:13:24 +0300, Sergey Fedorov wrote:
>> On 14/05/16 06:34, Emilio G. Cota wrote:
(snip)
>>> +        while (atomic_read(&spin->value)) {
>>> +            cpu_relax();
>>> +        }
>>> +    }
>> Looks like relaxed atomic access can be a subject to various
>> optimisations according to
>> https://gcc.gnu.org/wiki/Atomic/GCCMM/AtomicSync#Relaxed.
> The important thing here is that the read actually happens
> on every iteration; this is achieved with atomic_read().
> Barriers etc. do not matter here because once we exit
> the loop, we try to acquire the lock -- and if we succeed,
> we then emit the right barrier.

I just can't find where it is stated that an expression like
"__atomic_load(ptr, &_val, __ATOMIC_RELAXED)" has _compiler_ barrier or
volatile-access semantics. Hopefully, cpu_relax() serves as a compiler
barrier. If we rely on that, we'd better put a comment about it.
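For reference, here is a minimal sketch of the test-and-test-and-set idiom
under discussion, written directly against GCC's __atomic builtins (which
QEMU's atomic_read()/atomic_xchg() wrappers map onto). The names
(Spin, spin_lock_sketch, cpu_relax_sketch) are illustrative, not QEMU's
actual code:

```c
typedef struct {
    int value;
} Spin;

static inline void cpu_relax_sketch(void)
{
#if defined(__x86_64__) || defined(__i386__)
    /* PAUSE hint for the CPU; the "memory" clobber also makes this
     * a compiler barrier, which is the property discussed above. */
    __asm__ __volatile__("pause" ::: "memory");
#else
    __asm__ __volatile__("" ::: "memory");
#endif
}

static inline void spin_lock_sketch(Spin *s)
{
    /* The exchange that actually takes the lock carries the
     * ACQUIRE barrier... */
    while (__atomic_exchange_n(&s->value, 1, __ATOMIC_ACQUIRE)) {
        /* ...while the inner wait loop only reads, with relaxed
         * ordering, so contended waiters don't bounce the line. */
        while (__atomic_load_n(&s->value, __ATOMIC_RELAXED)) {
            cpu_relax_sketch();
        }
    }
}

static inline void spin_unlock_sketch(Spin *s)
{
    __atomic_store_n(&s->value, 0, __ATOMIC_RELEASE);
}
```

Note that the correctness of the inner loop only needs the load to be
re-issued on each iteration; ordering is established by the outer
exchange, exactly as argued above.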

Kind regards,
Sergey

>>> +static inline bool qemu_spin_locked(QemuSpin *spin)
>>> +{
>>> +    return atomic_read_acquire(&spin->value);
>> Why not just atomic_read()?
> I think atomic_read() is better, yes. I'll change it. I went
> with the fence because I wanted to have at least a caller
> of atomic_read_acquire :P
>
> I also hesitated between calling it _locked or _is_locked;
> I used _locked for consistency with qemu_mutex_iothread_locked,
> although I think _is_locked is a bit clearer:
> qemu_spin_locked(foo)
>    is a little too similar to
> qemu_spin_lock(foo).
>
> Thanks,
>
>               Emilio
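On the atomic_read() vs atomic_read_acquire() point: a relaxed load is
arguably enough for an advisory accessor like this, since the answer can be
stale the instant it is returned and no subsequent accesses are ordered
against it. A sketch under that assumption (names illustrative, not QEMU's
actual code):

```c
#include <stdbool.h>

typedef struct {
    int value;
} Spin;

/* Advisory "is it locked right now?" check: no acquire barrier needed,
 * because the caller cannot rely on the result staying true anyway. */
static inline bool spin_is_locked_sketch(Spin *s)
{
    return __atomic_load_n(&s->value, __ATOMIC_RELAXED);
}
```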



