[PULL 11/15] target/s390x: Use cpu_*_mmu instead of helper_*_mmu
From: Richard Henderson
Subject: [PULL 11/15] target/s390x: Use cpu_*_mmu instead of helper_*_mmu
Date: Wed, 13 Oct 2021 11:22:35 -0700
The helper_*_mmu functions were the only thing available
when this code was written. This could have been adjusted
when we added cpu_*_mmuidx_ra, but now we can most easily
use the newest set of interfaces.
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
target/s390x/tcg/mem_helper.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/target/s390x/tcg/mem_helper.c b/target/s390x/tcg/mem_helper.c
index 251d4acf55..17e3f83641 100644
--- a/target/s390x/tcg/mem_helper.c
+++ b/target/s390x/tcg/mem_helper.c
@@ -249,13 +249,13 @@ static void do_access_memset(CPUS390XState *env, vaddr vaddr, char *haddr,
* page. This is especially relevant to speed up TLB_NOTDIRTY.
*/
g_assert(size > 0);
- helper_ret_stb_mmu(env, vaddr, byte, oi, ra);
+ cpu_stb_mmu(env, vaddr, byte, oi, ra);
haddr = tlb_vaddr_to_host(env, vaddr, MMU_DATA_STORE, mmu_idx);
if (likely(haddr)) {
memset(haddr + 1, byte, size - 1);
} else {
for (i = 1; i < size; i++) {
- helper_ret_stb_mmu(env, vaddr + i, byte, oi, ra);
+ cpu_stb_mmu(env, vaddr + i, byte, oi, ra);
}
}
}
@@ -291,7 +291,7 @@ static uint8_t do_access_get_byte(CPUS390XState *env, vaddr vaddr, char **haddr,
* Do a single access and test if we can then get access to the
* page. This is especially relevant to speed up TLB_NOTDIRTY.
*/
- byte = helper_ret_ldub_mmu(env, vaddr + offset, oi, ra);
+ byte = cpu_ldb_mmu(env, vaddr + offset, oi, ra);
*haddr = tlb_vaddr_to_host(env, vaddr, MMU_DATA_LOAD, mmu_idx);
return byte;
#endif
@@ -325,7 +325,7 @@ static void do_access_set_byte(CPUS390XState *env, vaddr vaddr, char **haddr,
* Do a single access and test if we can then get access to the
* page. This is especially relevant to speed up TLB_NOTDIRTY.
*/
- helper_ret_stb_mmu(env, vaddr + offset, byte, oi, ra);
+ cpu_stb_mmu(env, vaddr + offset, byte, oi, ra);
*haddr = tlb_vaddr_to_host(env, vaddr, MMU_DATA_STORE, mmu_idx);
#endif
}
--
2.25.1