qemu-devel

Re: [PATCH 2/2] fix lock cmpxchg instruction


From: Wei Li
Subject: Re: [PATCH 2/2] fix lock cmpxchg instruction
Date: Mon, 21 Mar 2022 08:50:32 +0000 (UTC)

> This is better addressed with a movcond:
OK, a movcond is better than a branch. :)
I will update this in patch v2.

Wei Li


On Monday, March 21, 2022, 03:21:27 AM GMT+8, Richard Henderson <richard.henderson@linaro.org> wrote:


On 3/19/22 09:06, Wei Li wrote:
> For lock cmpxchg, the situation is more complex. After the instruction
> is completed by tcg_gen_atomic_cmpxchg_tl, a branch is needed to judge
> whether oldv == cmpv. The instruction only touches the accumulator when
> oldv != cmpv.
>
> Signed-off-by: Wei Li <lw945lw945@yahoo.com>
> ---
>  target/i386/tcg/translate.c | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/target/i386/tcg/translate.c b/target/i386/tcg/translate.c
> index 05be8d08e6..4fd9c03cb7 100644
> --- a/target/i386/tcg/translate.c
> +++ b/target/i386/tcg/translate.c
> @@ -5360,7 +5360,12 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
>                  gen_lea_modrm(env, s, modrm);
>                  tcg_gen_atomic_cmpxchg_tl(oldv, s->A0, cmpv, newv,
>                                            s->mem_index, ot | MO_LE);
> +                label1 = gen_new_label();
> +                gen_extu(ot, oldv);
> +                gen_extu(ot, cmpv);
> +                tcg_gen_brcond_tl(TCG_COND_EQ, oldv, cmpv, label1);
>                  gen_op_mov_reg_v(s, ot, R_EAX, oldv);

This is better addressed with a movcond:

    TCGv temp = tcg_temp_new();
    tcg_gen_mov_tl(temp, cpu_regs[R_EAX]);
    /* Perform the merge into %al or %ax as required by ot. */
    gen_op_mov_reg_v(s, ot, R_EAX, oldv);
    /* Undo the entire modification to %rax if comparison equal. */
    tcg_gen_movcond_tl(TCG_COND_EQ, cpu_regs[R_EAX], oldv, cmpv,
                       cpu_regs[R_EAX], temp);
    tcg_temp_free(temp);



r~
