Re: [Qemu-devel] [PATCH] tcg: Initialize return value after exit_atomic


From: Nikunj A Dadhania
Subject: Re: [Qemu-devel] [PATCH] tcg: Initialize return value after exit_atomic
Date: Wed, 26 Apr 2017 16:10:29 +0530
User-agent: Notmuch/0.24.1 (https://notmuchmail.org) Emacs/25.1.1 (x86_64-redhat-linux-gnu)

Nikunj A Dadhania <address@hidden> writes:

> Richard Henderson <address@hidden> writes:
>
>> On 04/25/2017 01:21 PM, Nikunj A Dadhania wrote:
>>> Richard Henderson <address@hidden> writes:
>>> 
>>>> Users of tcg_gen_atomic_cmpxchg and do_atomic_op rightfully utilize
>>>> the output.  Even though this code is dead, it gets translated, and
>>>> without the initialization we encounter a tcg_error.
>>>>
>>>> Reported-by: Nikunj A Dadhania <address@hidden>
>>>> Signed-off-by: Richard Henderson <address@hidden>
>>> 
>>> With this the tcg_error goes away.
>>> 
>>> But then powernv skiboot code [1] enters into infinite loop. Basically,
>>> in target/ppc/translate.c:gen_conditional_store(), setcond_tl will
>>> always fail, and CRF_EQ_BIT will never be set, the lock will never be
>>> taken.
>>
>> The setcond_tl *shouldn't* always fail.
>
> Correct, in fact we never reach it.
>
>> If that's the case, then we have another bug in the !parallel_cpus
>> code path for gen_conditional_store.
>
> Something interesting is happening, I have instrumented the code such
> that I get some prints for load with reservation and store conditional:
>
> First case is the success case for 32bit atomic_cmpxchg.
>
>     $ ./configure --target-list=ppc64-softmmu --cc=clang --host-cc=clang
>     $ ./ppc64-softmmu/qemu-system-ppc64 -machine powernv,usb=off -vga none -nographic
> [lwarx]
>     helper_myprint: t0 cafe0000 t1 cafe0000
>     helper_myprint: t0 cafe0001 t1 cafe0001
>     helper_myprint: t0 cafe0002 t1 cafe0002
>     helper_myprint: t0 f0 t1 0
> [stwcx]
>     helper_myprint: t0 dead0000 t1 dead0000
>     helper_myprint: t0 f0 t1 0
>     helper_myprint: t0 dead0001 t1 dead0001
>     helper_myprint: t0 dead0011 t1 dead0011
>     helper_myprint: t0 0 t1 0
> [success as t0 and cpu_reserve_val is same]
>
> [ldarx]
>     helper_myprint: t0 cafe0000 t1 cafe0000
>     helper_myprint: t0 cafe0001 t1 cafe0001
>     helper_myprint: t0 cafe0002 t1 cafe0002
>     helper_myprint: t0 30200018 t1 0
>
> [ cpu_reserve = 30200018, cpu_reserve_val = 0 after load ]
>
> [stdcx]
>     helper_myprint: t0 dead0000 t1 dead0000
>     helper_myprint: t0 30200018 t1 0
>     helper_myprint: t0 dead0001 t1 dead0001
>
> [ That is before atomic_cmpxchg_tl, and suddenly we exit out, we did
> not reach setcond_tl ]
>
>     helper_myprint: t0 dead0000 t1 dead0000
>
> **** [ re-entering gen_store_conditional() ] ****
>
>     helper_myprint: t0 ffffffffffffffff t1 0
>
> **** [ cpu_reserve is corrupted ] ****
>

That is because of the following:

tcg_gen_atomic_cmpxchg_tl()
 -> gen_helper_exit_atomic()
     -> HELPER(exit_atomic)
         -> cpu_loop_exit_atomic() -> EXCP_ATOMIC
             -> qemu_tcg_cpu_thread_fn() => case EXCP_ATOMIC
                 -> cpu_exec_step_atomic()
                        -> cpu_exec_step()
                         -> cc->cpu_exec_enter() = ppc_cpu_exec_enter()
                            Sets env->reserve_addr = -1;

So when we re-enter, reserve_addr has been wiped out. After removing
this reset of reserve_addr, I get a bit further and stdcx. succeeds.


[ldarx]
    helper_myprint: t0 cafe0000 t1 cafe0000
    helper_myprint: t0 cafe0001 t1 cafe0001
    helper_myprint: t0 cafe0002 t1 cafe0002
    helper_myprint: t0 30200018 t1 0

[stdcx.]

    helper_myprint: t0 dead0000 t1 dead0000
    helper_myprint: t0 30200018 t1 0
    helper_myprint: t0 dead0001 t1 dead0001

[ re-enters ]

    helper_myprint: t0 dead0000 t1 dead0000

[ cpu_reserve value is intact now ]

    helper_myprint: t0 30200018 t1 0
    helper_myprint: t0 dead0001 t1 dead0001
    helper_myprint: t0 dead0011 t1 dead0011
    helper_myprint: t0 0 t1 0

[ And there is a match ]

But then the code gets stuck here.

Regards
Nikunj
         



