From: Nikunj A Dadhania
Subject: Re: [Qemu-devel] [PATCH RFC v1 3/3] target/ppc: Generate fence operations
Date: Fri, 07 Apr 2017 10:51:04 +0530
User-agent: Notmuch/0.23.5 (https://notmuchmail.org) Emacs/25.1.1 (x86_64-redhat-linux-gnu)

Richard Henderson <address@hidden> writes:

> On 04/06/2017 03:22 AM, Nikunj A Dadhania wrote:
>> @@ -3028,6 +3030,7 @@ static void gen_##name(DisasContext *ctx)              \
>>      tcg_gen_qemu_ld_tl(gpr, t0, ctx->mem_idx, memop);                \
>>      tcg_gen_mov_tl(cpu_reserve, t0);                                 \
>>      tcg_gen_mov_tl(cpu_reserve_val, gpr);                            \
>> +    tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);                           \
>
> Please put the barrier next to the load.
> I hope we can merge these soon.

Sure.
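
Something like this, with the barrier placed right after the load in the
larx macro (a sketch based on the hunk above; same TCG ops, just reordered):

      tcg_gen_qemu_ld_tl(gpr, t0, ctx->mem_idx, memop);                \
      tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);                           \
      tcg_gen_mov_tl(cpu_reserve, t0);                                 \
      tcg_gen_mov_tl(cpu_reserve_val, gpr);                            \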

>
>> @@ -3196,6 +3199,7 @@ static void gen_##name(DisasContext *ctx)      \
>>      if (len > 1) {                                          \
>>          gen_check_align(ctx, t0, (len) - 1);                \
>>      }                                                       \
>> +    tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);                  \
>>      gen_conditional_store(ctx, t0, rS(ctx->opcode), memop); \
>>      tcg_temp_free(t0);                                      \
>>  }
>
> This one is more complicated...
>
> The success path includes an atomic_cmpxchg, which itself is an SC barrier.
> However, the fail path branches across the cmpxchg and so sees no barrier,
> but one is still required by the architecture.  How about a
> gen_conditional_store that looks like this:
>
> {
>    TCGLabel *l1 = gen_new_label();
>    TCGLabel *l2 = gen_new_label();
>    TCGv t0;
>
>    tcg_gen_brcond_tl(TCG_COND_NE, EA, cpu_reserve, l1);
>
>    t0 = tcg_temp_new();
>    tcg_gen_atomic_cmpxchg_tl(t0, cpu_reserve, cpu_reserve_val,
>                              cpu_gpr[reg], ctx->mem_idx,
>                              DEF_MEMOP(memop) | MO_ALIGN);
>    /* CR0 = EQ (if the cmpxchg stored the new value) | SO.  */
>    tcg_gen_setcond_tl(TCG_COND_EQ, t0, t0, cpu_reserve_val);
>    tcg_gen_shli_tl(t0, t0, CRF_EQ_BIT);
>    tcg_gen_or_tl(t0, t0, cpu_so);
>    tcg_gen_trunc_tl_i32(cpu_crf[0], t0);
>    tcg_temp_free(t0);
>    tcg_gen_br(l2);
>
>    gen_set_label(l1);
>
>    /* Address mismatch implies failure.  But we still need to provide the
>       memory barrier semantics of the instruction.  */
>    tcg_gen_mb(TCG_MO_ALL | TCG_BAR_STRL);
>    tcg_gen_trunc_tl_i32(cpu_crf[0], cpu_so);
>
>    gen_set_label(l2);
>    tcg_gen_movi_tl(cpu_reserve, -1);
> }
>
> Note that you don't need to reset cpu_reserve_val at all.

Sure.

> I just thought of something we might need to check for this and other ll/sc 
> implementations -- do we need to check for address misalignment along the 
> address comparison failure path?

We do that in the macro:

      if (len > 1) {                                          \
          gen_check_align(ctx, t0, (len) - 1);                \
      }                                                       \
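
And with your proposed gen_conditional_store() the explicit barrier in the
macro goes away, so the stcx path would look roughly like this (sketch):

      if (len > 1) {                                          \
          gen_check_align(ctx, t0, (len) - 1);                \
      }                                                       \
      /* barrier now emitted inside gen_conditional_store(),  \
         on both the success and failure paths */             \
      gen_conditional_store(ctx, t0, rS(ctx->opcode), memop); \
      tcg_temp_free(t0);                                      \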

Would we still need a barrier before the alignment check?

> I suspect that we do.

Regards
Nikunj



