From: Peter Maydell
Subject: [PATCH for-6.2 05/34] target/arm: Fix mask handling for MVE narrowing operations
Date: Tue, 13 Jul 2021 14:36:57 +0100
In the MVE helpers for the narrowing operations (DO_VSHRN and
DO_VSHRN_SAT) we were using the wrong bits of the predicate mask for
the 'top' versions of the insn. This is because the loop works over
the double-sized input elements and shifts the predicate mask by that
many bits each time, but when we write out the half-sized output we
must look at the mask bits for whichever half of the element we are
writing to.
Correct this by shifting the whole mask right by ESIZE bits for the
'top' insns. This also allows us to simplify the saturation-bit
checking (where we had noticed that we needed to look at a different
mask bit for the 'top' insn).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
target/arm/mve_helper.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/target/arm/mve_helper.c b/target/arm/mve_helper.c
index 99b4801088f..8cbfd3a8c53 100644
--- a/target/arm/mve_helper.c
+++ b/target/arm/mve_helper.c
@@ -1361,6 +1361,7 @@ DO_VSHLL_ALL(vshllt, true)
TYPE *d = vd; \
uint16_t mask = mve_element_mask(env); \
unsigned le; \
+ mask >>= ESIZE * TOP; \
for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \
TYPE r = FN(m[H##LESIZE(le)], shift); \
mergemask(&d[H##ESIZE(le * 2 + TOP)], r, mask); \
@@ -1422,11 +1423,12 @@ static inline int32_t do_sat_bhs(int64_t val, int64_t min, int64_t max,
uint16_t mask = mve_element_mask(env); \
bool qc = false; \
unsigned le; \
+ mask >>= ESIZE * TOP; \
for (le = 0; le < 16 / LESIZE; le++, mask >>= LESIZE) { \
bool sat = false; \
TYPE r = FN(m[H##LESIZE(le)], shift, &sat); \
mergemask(&d[H##ESIZE(le * 2 + TOP)], r, mask); \
- qc |= sat && (mask & 1 << (TOP * ESIZE)); \
+ qc |= sat & mask & 1; \
} \
if (qc) { \
env->vfp.qc[0] = qc; \
--
2.20.1