[PATCH v3 39/66] target/s390x: Use cpu_*_mmu instead of helper_*_mmu
From: Richard Henderson
Subject: [PATCH v3 39/66] target/s390x: Use cpu_*_mmu instead of helper_*_mmu
Date: Wed, 18 Aug 2021 09:18:53 -1000
The helper_*_mmu functions were the only thing available
when this code was written. This could have been adjusted
when we added cpu_*_mmuidx_ra, but now we can most easily
use the newest set of interfaces.
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/s390x/tcg/mem_helper.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/target/s390x/tcg/mem_helper.c b/target/s390x/tcg/mem_helper.c
index b20a82a914..4115cadbd7 100644
--- a/target/s390x/tcg/mem_helper.c
+++ b/target/s390x/tcg/mem_helper.c
@@ -248,13 +248,13 @@ static void do_access_memset(CPUS390XState *env, vaddr vaddr, char *haddr,
* page. This is especially relevant to speed up TLB_NOTDIRTY.
*/
g_assert(size > 0);
- helper_ret_stb_mmu(env, vaddr, byte, oi, ra);
+ cpu_stb_mmu(env, vaddr, byte, oi, ra);
haddr = tlb_vaddr_to_host(env, vaddr, MMU_DATA_STORE, mmu_idx);
if (likely(haddr)) {
memset(haddr + 1, byte, size - 1);
} else {
for (i = 1; i < size; i++) {
- helper_ret_stb_mmu(env, vaddr + i, byte, oi, ra);
+ cpu_stb_mmu(env, vaddr + i, byte, oi, ra);
}
}
}
@@ -290,7 +290,7 @@ static uint8_t do_access_get_byte(CPUS390XState *env, vaddr vaddr, char **haddr,
* Do a single access and test if we can then get access to the
* page. This is especially relevant to speed up TLB_NOTDIRTY.
*/
- byte = helper_ret_ldub_mmu(env, vaddr + offset, oi, ra);
+ byte = cpu_ldb_mmu(env, vaddr + offset, oi, ra);
*haddr = tlb_vaddr_to_host(env, vaddr, MMU_DATA_LOAD, mmu_idx);
return byte;
#endif
@@ -324,7 +324,7 @@ static void do_access_set_byte(CPUS390XState *env, vaddr vaddr, char **haddr,
* Do a single access and test if we can then get access to the
* page. This is especially relevant to speed up TLB_NOTDIRTY.
*/
- helper_ret_stb_mmu(env, vaddr + offset, byte, oi, ra);
+ cpu_stb_mmu(env, vaddr + offset, byte, oi, ra);
*haddr = tlb_vaddr_to_host(env, vaddr, MMU_DATA_STORE, mmu_idx);
#endif
}
--
2.25.1