From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH v3 01/11] util/qht: Document memory ordering assumptions
Date: Thu, 14 Jul 2016 10:05:26 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.1.1


On 13/07/2016 20:03, Sergey Fedorov wrote:
> On 13/07/16 14:13, Paolo Bonzini wrote:
>>
>> On 12/07/2016 22:13, Sergey Fedorov wrote:
>>> diff --git a/include/qemu/qht.h b/include/qemu/qht.h
>>> index 70bfc68b8d67..5f633e5d8100 100644
>>> --- a/include/qemu/qht.h
>>> +++ b/include/qemu/qht.h
>>> @@ -69,6 +69,9 @@ void qht_destroy(struct qht *ht);
>>>   * Attempting to insert a NULL @p is a bug.
>>>   * Inserting the same pointer @p with different @hash values is a bug.
>>>   *
>>> + * In case of successful operation, smp_wmb() is implied before the pointer is
>>> + * inserted into the hash table.
>>> + *
>>>   * Returns true on success.
>>>   * Returns false if the @p-@hash pair already exists in the hash table.
>>>   */
>>> @@ -86,6 +89,9 @@ bool qht_insert(struct qht *ht, void *p, uint32_t hash);
>>>   * The user-provided @func compares pointers in QHT against @userp.
>>>   * If the function returns true, a match has been found.
>>>   *
>>> + * smp_rmb() is implied before and after the pointer is looked up and retrieved
>>> + * from the hash table.
>> Do we really need to guarantee smp_rmb(), or is smp_read_barrier_depends()
>> aka atomic_rcu_read() enough?
> 
> The idea was something like: qht_lookup() can be "paired" with either
> qht_insert() or qht_remove(). The intention was to guarantee independent
> tb_jmp_cache lookup performed before qht_lookup() in tb_find_physical().

Of course; that would not be impacted if qht_lookup() only made a weaker
promise.

Anyway, this patch (or mine or a mix of them) can go in after hard freeze.

Paolo

>>
>> Likewise, perhaps only an implicit smp_wmb() before the insert is
>> "interesting" to qht_insert__locked callers.
> 
> I see the idea behind this patch is worthwhile, so I'll spend more time on
> refining it, and I must look at your patch carefully ;)
> 
> Thanks,
> Sergey
> 
>>
>> Something like:
>>
>> diff --git a/include/qemu/qht.h b/include/qemu/qht.h
>> index 70bfc68..f4f1d55 100644
>> --- a/include/qemu/qht.h
>> +++ b/include/qemu/qht.h
>> @@ -69,6 +69,9 @@ void qht_destroy(struct qht *ht);
>>   * Attempting to insert a NULL @p is a bug.
>>   * Inserting the same pointer @p with different @hash values is a bug.
>>   *
>> + * In case of successful operation, smp_wmb() is implied before the pointer is
>> + * inserted into the hash table.
>> + *
>>   * Returns true on success.
>>   * Returns false if the @p-@hash pair already exists in the hash table.
>>   */
>> @@ -83,6 +86,8 @@ bool qht_insert(struct qht *ht, void *p, uint32_t hash);
>>   *
>>   * Needs to be called under an RCU read-critical section.
>>   *
>> + * smp_read_barrier_depends() is implied before the call to @func.
>> + *
>>   * The user-provided @func compares pointers in QHT against @userp.
>>   * If the function returns true, a match has been found.
>>   *
>> @@ -105,6 +110,10 @@ void *qht_lookup(struct qht *ht, qht_lookup_func_t func, const void *userp,
>>   * This guarantees that concurrent lookups will always compare against valid
>>   * data.
>>   *
>> + * In case of successful operation, a smp_wmb() barrier is implied before and
>> + * after the pointer is removed from the hash table.  In other words,
>> + * a successful qht_remove acts as a bidirectional write barrier.
>> + *
>>   * Returns true on success.
>>   * Returns false if the @p-@hash pair was not found.
>>   */
>> diff --git a/util/qht.c b/util/qht.c
>> index 40d6e21..d38948e 100644
>> --- a/util/qht.c
>> +++ b/util/qht.c
>> @@ -445,7 +445,11 @@ void *qht_do_lookup(struct qht_bucket *head, qht_lookup_func_t func,
>>      do {
>>          for (i = 0; i < QHT_BUCKET_ENTRIES; i++) {
>>              if (b->hashes[i] == hash) {
>> -                void *p = atomic_read(&b->pointers[i]);
>> +                /* The pointer is dereferenced before seqlock_read_retry,
>> +                 * so (unlike qht_insert__locked) we need to use
>> +                 * atomic_rcu_read here.
>> +                 */
>> +                void *p = atomic_rcu_read(&b->pointers[i]);
>>  
>>                  if (likely(p) && likely(func(p, userp))) {
>>                      return p;
>> @@ -535,6 +539,7 @@ static bool qht_insert__locked(struct qht *ht, struct qht_map *map,
>>          atomic_rcu_set(&prev->next, b);
>>      }
>>      b->hashes[i] = hash;
>> +    /* smp_wmb() implicit in seqlock_write_begin.  */
>>      atomic_set(&b->pointers[i], p);
>>      seqlock_write_end(&head->sequence);
>>      return true;
>> @@ -659,6 +664,9 @@ bool qht_remove__locked(struct qht_map *map, struct qht_bucket *head,
>>              }
>>              if (q == p) {
>>                  qht_debug_assert(b->hashes[i] == hash);
>> +                /* seqlock_write_begin and seqlock_write_end provide write
>> +                 * memory barrier semantics to callers of qht_remove.
>> +                 */
>>                  seqlock_write_begin(&head->sequence);
>>                  qht_bucket_remove_entry(b, i);
>>                  seqlock_write_end(&head->sequence);
> 
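[Editor's note: the barrier pairing under discussion can be sketched in standalone C11 atomics. qht_insert()'s implied smp_wmb() corresponds to release ordering on the publishing store, and the smp_read_barrier_depends()/atomic_rcu_read() that Paolo suggests may suffice corresponds to a dependency-ordered (consume) load. The names `publish`, `lookup`, and `slot` below are illustrative only, not QEMU API.]

```c
#include <stdatomic.h>
#include <stddef.h>

struct entry {
    int key;
    int value;
};

/* A single published slot stands in for a QHT bucket entry. */
static struct entry *_Atomic slot;

/* Writer side: fully initialize the entry, then publish it.  The release
 * store plays the role of the smp_wmb() + store that qht_insert()
 * documents: readers that observe the pointer also observe the fields. */
static void publish(struct entry *e, int key, int value)
{
    e->key = key;
    e->value = value;
    atomic_store_explicit(&slot, e, memory_order_release);
}

/* Reader side: a dependency-ordered load (memory_order_consume, which
 * compilers promote to acquire in practice) is enough to dereference the
 * returned pointer safely -- no full smp_rmb() is needed for this
 * particular pairing, which is the point of Paolo's question above. */
static struct entry *lookup(void)
{
    return atomic_load_explicit(&slot, memory_order_consume);
}
```

Single-threaded this is trivially correct; the orderings only matter when publisher and reader run concurrently, as they do between qht_insert() and qht_lookup().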


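[Editor's note: the other half of the discussion is the seqlock retry loop behind qht_do_lookup(), where the pointer is dereferenced before the retry check, hence the atomic_rcu_read in Paolo's hunk. A standalone C11 sketch, with explicit fences standing in for the smp_wmb()/smp_rmb() inside QEMU's seqlock_write_begin/end and seqlock_read_begin/retry; `bucket_set`/`bucket_get` are illustrative names, not QEMU's API.]

```c
#include <stdatomic.h>
#include <stddef.h>

typedef struct {
    atomic_uint sequence;   /* odd while a writer is mid-update */
    void *_Atomic pointer;
} bucket;

/* Writer: bump the sequence to odd, fence (smp_wmb in write_begin),
 * update the entry, fence again (smp_wmb in write_end), bump to even. */
static void bucket_set(bucket *b, void *p)
{
    unsigned s = atomic_load_explicit(&b->sequence, memory_order_relaxed);
    atomic_store_explicit(&b->sequence, s + 1, memory_order_relaxed);
    atomic_thread_fence(memory_order_release);
    atomic_store_explicit(&b->pointer, p, memory_order_relaxed);
    atomic_thread_fence(memory_order_release);
    atomic_store_explicit(&b->sequence, s + 2, memory_order_relaxed);
}

/* Reader: retry if the sequence was odd or changed across the read.
 * Note the pointer is loaded (and would be dereferenced by a qht lookup
 * function) before the retry check succeeds, so the load itself must be
 * safe to consume -- the reason qht_do_lookup uses atomic_rcu_read. */
static void *bucket_get(bucket *b)
{
    unsigned s1, s2;
    void *p;
    do {
        s1 = atomic_load_explicit(&b->sequence, memory_order_relaxed);
        atomic_thread_fence(memory_order_acquire);  /* smp_rmb, read_begin */
        p = atomic_load_explicit(&b->pointer, memory_order_relaxed);
        atomic_thread_fence(memory_order_acquire);  /* smp_rmb, read_retry */
        s2 = atomic_load_explicit(&b->sequence, memory_order_relaxed);
    } while ((s1 & 1) || s1 != s2);
    return p;
}
```

This also makes the patch's claims concrete: a successful writer executes a release fence both before and after touching the entry, which is the "bidirectional write barrier" wording in the qht_remove() comment.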
