From: Peter Maydell
Subject: Re: [Qemu-devel] [PATCH] atomics: add explicit compiler fence in __atomic memory barriers
Date: Wed, 3 Jun 2015 13:25:44 +0100

On 3 June 2015 at 13:21, Paolo Bonzini <address@hidden> wrote:
> __atomic_thread_fence does not include a compiler barrier; in the
> C++11 memory model, fences take effect in combination with other
> atomic operations.  GCC implements this by making __atomic_load and
> __atomic_store access memory as if the pointer was volatile, and
> leaves no trace whatsoever of acquire and release fences in the
> compiler's intermediate representation.
>
> In QEMU, we want memory barriers to act on all memory, but at the same
> time we would like to use __atomic_thread_fence for portability reasons.
> Add compiler barriers manually around the __atomic_thread_fence.
>
> Signed-off-by: Paolo Bonzini <address@hidden>
> ---
>  include/qemu/atomic.h | 12 +++++++++---
>  1 file changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/include/qemu/atomic.h b/include/qemu/atomic.h
> index 98e05ca..bd2c075 100644
> --- a/include/qemu/atomic.h
> +++ b/include/qemu/atomic.h
> @@ -99,7 +99,13 @@
>
>  #ifndef smp_wmb
>  #ifdef __ATOMIC_RELEASE
> -#define smp_wmb()   __atomic_thread_fence(__ATOMIC_RELEASE)
> +/* __atomic_thread_fence does not include a compiler barrier; instead,
> + * the barrier is part of __atomic_load/__atomic_store's "volatile-like"
> + * semantics. If smp_wmb() is a no-op, absence of the barrier means that
> + * the compiler is free to reorder stores on each side of the barrier.
> + * Add one here, and similarly in smp_rmb() and smp_read_barrier_depends().
> + */
> +#define smp_wmb()   ({ barrier(); __atomic_thread_fence(__ATOMIC_RELEASE); barrier(); })
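
For readers without the tree open, roughly what the patched macro boils
down to.  The barrier() definition and the publish() example below are my
own sketch, not quoted from include/qemu/atomic.h; barrier() is assumed to
be the usual empty asm with a "memory" clobber:

    /* compiler-only barrier: emits no instruction, but the optimizer must
     * not cache or move memory accesses across it */
    #define barrier()   asm volatile("" ::: "memory")

    /* the patched smp_wmb(): a release fence bracketed by compiler barriers */
    #define smp_wmb() \
        ({ barrier(); __atomic_thread_fence(__ATOMIC_RELEASE); barrier(); })

    int data, flag;

    void publish(int v)
    {
        data = v;     /* plain store */
        smp_wmb();    /* neither the compiler nor the CPU may reorder the
                       * data store after the flag store below */
        flag = 1;
    }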

The comment says "add one" but the patch is adding two.
An explanation of why you need a barrier on both sides and
can't manage with just one might be helpful.
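
(My guess at the concern, sketched rather than taken from the patch: if the
release fence really is invisible to the optimizer, then a barrier on only
one side leaves the other direction open.  Hypothetical names below, with
barrier() again assumed to be the usual empty asm with a "memory" clobber:

    #define barrier()   asm volatile("" ::: "memory")

    /* leading compiler barrier only */
    #define smp_wmb()   ({ barrier(); __atomic_thread_fence(__ATOMIC_RELEASE); })

    extern int a, b;

    void example(void)
    {
        a = 1;
        smp_wmb();
        b = 1;   /* if the fence leaves no trace in the IR, nothing
                  * compiler-visible keeps this store from being scheduled
                  * ahead of the fence instruction */
    }

and symmetrically, a trailing-only barrier would let "a = 1" be sunk below
the fence.  Spelling that out in the comment would still be useful.)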

thanks
-- PMM


