From: Emilio G. Cota
Subject: Re: [Qemu-devel] [PATCH v2 02/12] util: add atomic64
Date: Tue, 18 Sep 2018 14:49:21 -0400
User-agent: Mutt/1.9.4 (2018-02-28)

On Tue, Sep 18, 2018 at 10:23:32 -0300, Murilo Opsfelder Araujo wrote:
> On Tue, Sep 11, 2018 at 04:43:04PM -0400, Emilio G. Cota wrote:
> > On Tue, Sep 11, 2018 at 05:43:38 -0700, Richard Henderson wrote:
> > > On 09/10/2018 04:27 PM, Emilio G. Cota wrote:
> > > > +#define GEN_READ(name, type)                    \
> > > > +    type name(const type *ptr)                  \
> > > > +    {                                           \
> > > > +        QemuSpin *lock = addr_to_lock(ptr);     \
> > > > +        type ret;                               \
> > > > +                                                \
> > > > +        qemu_spin_lock(lock);                   \
> > > > +        ret = *ptr;                             \
> > > > +        qemu_spin_unlock(lock);                 \
> > > > +        return ret;                             \
> > > > +    }
> > > > +
> > > > +GEN_READ(atomic_read_i64, int64_t)
> > > > +GEN_READ(atomic_read_u64, uint64_t)
> > >
> > > Is there really a good reason to have two external
> > > functions instead of making one of them an inline
> > > that performs a cast?
> >
> > Not really. I can do a follow-up patch if you want me to.
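
For reference, a minimal sketch of that variant (assumed shape, not
part of the posted patch) could look like this:

    /* Keep one out-of-line helper that takes the per-address lock. */
    int64_t atomic_read_i64(const int64_t *ptr);

    /* The unsigned flavour becomes a static inline that just casts. */
    static inline uint64_t atomic_read_u64(const uint64_t *ptr)
    {
        return (uint64_t)atomic_read_i64((const int64_t *)ptr);
    }
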
> >
> > > Is this any better than using libatomic?
> >
> > I didn't think of using libatomic. I just checked the source
> > code and it's quite similar:
> > - It uses 64 locks instead of 16 ($page_size/$cache_line,
> >   but these are hard-coded for POSIX as 4096/64, respectively)
> > - We compute the cacheline size and corresponding padding
> >   at run-time, which is a little better than the above.
> > - The locks are implemented as pthread_mutex instead of
> >   spinlocks. I think spinlocks make more sense here because
> >   we do not expect huge contention (systems without
> >   CONFIG_ATOMIC64 have few cores); for libatomic it makes
> >   sense to use mutexes because it might be used in many-core
> >   machines.
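
To make the lock-mapping scheme above concrete, here is a rough sketch
in plain C with the lock count and cache-line size fixed at compile
time (the real code sizes them at run time, pads each lock to its own
cache line, and uses QemuSpin rather than pthread spinlocks):

    #include <stdint.h>
    #include <pthread.h>

    #define NR_LOCKS   16   /* lock count discussed above */
    #define LINE_SIZE  64   /* assumed here; computed at run time in the patch */

    /* Must be initialized with pthread_spin_init() before first use. */
    static pthread_spinlock_t locks[NR_LOCKS];

    static pthread_spinlock_t *addr_to_lock(const void *addr)
    {
        /* Drop the within-line offset bits, then fold the address into
         * the lock array, so neighbouring 64-bit values on the same
         * cache line share a lock. */
        uintptr_t a = (uintptr_t)addr / LINE_SIZE;
        return &locks[a % NR_LOCKS];
    }
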
> >
> > So yes, we could have used libatomic. If you feel strongly
> > about it, I can give it a shot.
> 
> Would allowing the user to choose between libatomic and Emilio's
> implementation at configure time be a bad idea?
> 
> One could pass, for example, --with-libatomic to the configure script to use
> libatomic instead of Emilio's implementation, which could be the default - or
> vice versa.

If we decide to use libatomic then there's no need to keep
our own atomic64. As I said in another message, both
implementations are very similar.

libatomic supports more operations, but at the moment
we only need 64-bit atomic_read/write.
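
For comparison, the libatomic route would be roughly the following,
using the compiler's __atomic builtins (the write-side name below is
illustrative); on hosts without native 64-bit atomics these lower to
calls into libatomic, so we would have to link with -latomic:

    #include <stdint.h>

    static inline int64_t atomic_read_i64(const int64_t *ptr)
    {
        /* Becomes a __atomic_load_8() libatomic call on !CONFIG_ATOMIC64 hosts. */
        return __atomic_load_n(ptr, __ATOMIC_SEQ_CST);
    }

    static inline void atomic_write_i64(int64_t *ptr, int64_t val)
    {
        /* Likewise becomes a __atomic_store_8() libatomic call. */
        __atomic_store_n(ptr, val, __ATOMIC_SEQ_CST);
    }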

Thanks,

                Emilio


