From: Peter Maydell
Subject: [PULL 50/57] target/arm: Complete TBI clearing for user-only for SVE
Date: Fri, 26 Jun 2020 16:14:17 +0100

From: Richard Henderson <richard.henderson@linaro.org>
There are a number of paths through the SVE helpers by which the
top byte of the address is still intact for user-only, despite TBI.

Because we currently always set TBI for user-only, we do not
need to pass down the actual TBI setting from above, and we
can remove the top byte in the inner-most primitives, so that
none are forgotten. Moreover, this keeps the "dirty" pointer
around at the higher levels, where we need it for any MTE checking.

Since the normal case, especially for user-only, goes through
RAM, this clearing merely adds two insns per page lookup, which
will be completely in the noise.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200626033144.790098-39-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/cpu.c | 3 +++
target/arm/sve_helper.c | 19 +++++++++++++++++--
target/arm/translate-a64.c | 5 +++++
3 files changed, 25 insertions(+), 2 deletions(-)
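
The cleaning described above is done with useronly_clean_ptr().  As a
minimal, self-contained sketch of the idea -- assuming, as the reset
code below arranges, that TBI0 and TBI1 are both enabled, so the top
byte can be replaced by a sign-extension of bit 55 -- something like
the following would do (the name clean_tbi_ptr and the exact
formulation are illustrative, not a copy of the helper in
target/arm/internals.h):

#include <stdint.h>

/*
 * Illustrative stand-in for useronly_clean_ptr(): with TBI0 and TBI1
 * both enabled, bits 63:56 take no part in translation, so replace
 * them with a sign-extension of bit 55.  Low (TTBR0) addresses end up
 * with 0x00 in the top byte, high (TTBR1) addresses with 0xff.
 */
static inline uint64_t clean_tbi_ptr(uint64_t ptr)
{
    /* Shift the tag out, then arithmetic-shift it back in. */
    return (uint64_t)(((int64_t)(ptr << 8)) >> 8);
}

The pair of shifts is plausibly the "two insns per page lookup" the
message refers to.
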
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index d9876337c05..afe81e9b6c0 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -203,6 +203,9 @@ static void arm_cpu_reset(DeviceState *dev)
* Enable TBI0 and TBI1. While the real kernel only enables TBI0,
* turning on both here will produce smaller code and otherwise
* make no difference to the user-level emulation.
+ *
+ * In sve_probe_page, we assume that this is set.
+ * Do not modify this without other changes.
*/
env->cp15.tcr_el[1].raw_tcr = (3ULL << 37);
#else
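
For reference, in TCR_EL1 bit 37 is TBI0 and bit 38 is TBI1 (top-byte
ignore for the TTBR0 and TTBR1 halves of the address space), so the
(3ULL << 37) above switches both on.  A tiny standalone check, with
masks defined locally for illustration (QEMU has its own definitions):

#include <assert.h>
#include <stdint.h>

/* Bit positions per the Arm ARM; defined here only for this check. */
#define TCR_TBI0 (1ULL << 37)   /* top-byte ignore, TTBR0 range */
#define TCR_TBI1 (1ULL << 38)   /* top-byte ignore, TTBR1 range */

int main(void)
{
    uint64_t raw_tcr = 3ULL << 37;            /* value written at reset */
    assert(raw_tcr == (TCR_TBI0 | TCR_TBI1)); /* both halves enabled */
    return 0;
}
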
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index ad974c2cc57..382fa82bc8a 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -3966,14 +3966,16 @@ static void sve_##NAME##_host(void *vd, intptr_t reg_off, void *host) \
static void sve_##NAME##_tlb(CPUARMState *env, void *vd, intptr_t reg_off, \
target_ulong addr, uintptr_t ra) \
{ \
- *(TYPEE *)(vd + H(reg_off)) = (TYPEM)TLB(env, addr, ra); \
+ *(TYPEE *)(vd + H(reg_off)) = \
+ (TYPEM)TLB(env, useronly_clean_ptr(addr), ra); \
}
#define DO_ST_TLB(NAME, H, TYPEE, TYPEM, TLB) \
static void sve_##NAME##_tlb(CPUARMState *env, void *vd, intptr_t reg_off, \
target_ulong addr, uintptr_t ra) \
{ \
- TLB(env, addr, (TYPEM)*(TYPEE *)(vd + H(reg_off)), ra); \
+ TLB(env, useronly_clean_ptr(addr), \
+ (TYPEM)*(TYPEE *)(vd + H(reg_off)), ra); \
}
#define DO_LD_PRIM_1(NAME, H, TE, TM) \
@@ -4091,6 +4093,19 @@ static bool sve_probe_page(SVEHostPage *info, bool nofault,
int flags;
addr += mem_off;
+
+ /*
+ * User-only currently always issues with TBI. See the comment
+ * above useronly_clean_ptr. Usually we clean this top byte away
+ * during translation, but we can't do that for e.g. vector + imm
+ * addressing modes.
+ *
+ * We currently always enable TBI for user-only, and do not provide
+ * a way to turn it off. So clean the pointer unconditionally here,
+ * rather than look it up here, or pass it down from above.
+ */
+ addr = useronly_clean_ptr(addr);
+
flags = probe_access_flags(env, addr, access_type, mmu_idx, nofault,
&info->host, retaddr);
info->flags = flags;
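
To see what the unconditional clean in sve_probe_page does to an
address that still carries a tag (e.g. one produced by a vector + imm
addressing mode, which translation-time cleaning cannot reach), here
is a small check built on the sketch from above; the addresses and
tag values are made up for illustration.  Note the caller keeps the
original "dirty" pointer for MTE checking; only the copy used for the
lookup is cleaned.

#include <assert.h>
#include <stdint.h>

/* Same illustrative cleaning as sketched earlier. */
static inline uint64_t clean_tbi_ptr(uint64_t ptr)
{
    return (uint64_t)(((int64_t)(ptr << 8)) >> 8);
}

int main(void)
{
    /* Low-half (TTBR0) address carrying tag 0x5a: the tag is cleared. */
    assert(clean_tbi_ptr(0x5a00000012345678ull) == 0x0000000012345678ull);
    /* High-half (TTBR1) address (bit 55 set): top byte becomes 0xff. */
    assert(clean_tbi_ptr(0x12ffffff80001000ull) == 0xffffffff80001000ull);
    return 0;
}
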
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index e46c4a49e00..c20af6ee9d0 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -14634,6 +14634,11 @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase,
dc->features = env->features;
dc->dcz_blocksize = arm_cpu->dcz_blocksize;
+#ifdef CONFIG_USER_ONLY
+ /* In sve_probe_page, we assume TBI is enabled. */
+ tcg_debug_assert(dc->tbid & 1);
+#endif
+
/* Single step state. The code-generation logic here is:
* SS_ACTIVE == 0:
* generate code with no special handling for single-stepping (except
--
2.20.1