Re: [Qemu-devel] [PATCH 29/31] target/s390x: Use atomic operations for COMPARE SWAP PURGE


From: Richard Henderson
Subject: Re: [Qemu-devel] [PATCH 29/31] target/s390x: Use atomic operations for COMPARE SWAP PURGE
Date: Tue, 23 May 2017 09:31:29 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.1.0

On 05/23/2017 05:28 AM, Aurelien Jarno wrote:
> On 2017-05-22 20:03, Richard Henderson wrote:
> > +/* flush global tlb */
> > +void HELPER(purge)(CPUS390XState *env)
> > +{
> > +    S390CPU *cpu = s390_env_get_cpu(env);
> > +
> > +    tlb_flush_all_cpus(CPU(cpu));
>
> From what I understand from the PoP, the instruction should not complete
> before the TLB has been purged on all CPUs. Therefore I guess
> tlb_flush_all_cpus_synced() should be used instead.
I don't read that from this:

# (1) all specified entries have been cleared
# from the ALB and TLB of this CPU and

# (2) all other
# CPUs in the configuration have completed any stor-
# age accesses, including the updating of the change
# and reference bits, by using the specified ALB and
# TLB entries.

It talks about the reference bits being updated -- presumably before the TLB entry is flushed. But it doesn't say "all specified ALB and TLB entries of other CPUs in the configuration".

But if you still disagree, it's certainly an easy change as you note.
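
For reference, a minimal sketch of what that change would look like (untested; tlb_flush_all_cpus_synced() schedules the flush as safe work, so it should have completed on all vCPUs before this CPU resumes executing translated code):

    /* flush global tlb, waiting for all CPUs to have flushed */
    void HELPER(purge)(CPUS390XState *env)
    {
        S390CPU *cpu = s390_env_get_cpu(env);

        tlb_flush_all_cpus_synced(CPU(cpu));
    }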


> > +    tcg_gen_atomic_cmpxchg_i64(old, addr, o->in1, o->out2,
>
> Here the prep generator took the 32-bit version of in1. I guess the same
> should be done for out2.

No, in1 is zero-extended for its use ...


> > +                               get_mem_index(s), mop | MO_ALIGN);
> > +    tcg_temp_free_i64(addr);
> > +
> > +    /* Are the memory and expected values (un)equal?  */
> > +    cc = tcg_temp_new_i64();
> > +    tcg_gen_setcond_i64(TCG_COND_NE, cc, o->in1, old);

... here.

For out2 above, cmpxchg acts as any other store wrt MO_TEUL, in that it ignores the unused upper bits.
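
To spell out the 32-bit (CS) case with the same names as in the hunk above -- this is just an annotated restatement, not new code:

    /* With MO_TEUL the cmpxchg loads and stores only 32 bits, so the upper
       32 bits of o->out2 are ignored by the store, exactly as they would be
       for any other MO_TEUL store.  */
    tcg_gen_atomic_cmpxchg_i64(old, addr, o->in1, o->out2,
                               get_mem_index(s), MO_TEUL | MO_ALIGN);

    /* "old" comes back zero-extended, so the comparison is only correct
       because the prep generator already zero-extended o->in1.  */
    tcg_gen_setcond_i64(TCG_COND_NE, cc, o->in1, old);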


r~


