[PATCH 27/31] target/arm: Tidy handle_vec_simd_shri
From: Richard Henderson
Subject: [PATCH 27/31] target/arm: Tidy handle_vec_simd_shri
Date: Thu, 26 Mar 2020 16:08:34 -0700
Now that we've converted all cases to gvec, there is quite a bit
of dead code at the end of the function. Remove it.
Sink the call to gen_gvec_fn2i to the end, loading a function
pointer within the switch statement.
Signed-off-by: Richard Henderson <address@hidden>
---
target/arm/translate-a64.c | 56 ++++++++++----------------------------
1 file changed, 14 insertions(+), 42 deletions(-)
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index f7d492cce4..fc156a217a 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -11096,16 +11096,7 @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
int size = 32 - clz32(immh) - 1;
int immhb = immh << 3 | immb;
int shift = 2 * (8 << size) - immhb;
- bool accumulate = false;
- int dsize = is_q ? 128 : 64;
- int esize = 8 << size;
- int elements = dsize/esize;
- MemOp memop = size | (is_u ? 0 : MO_SIGN);
- TCGv_i64 tcg_rn = new_tmp_a64(s);
- TCGv_i64 tcg_rd = new_tmp_a64(s);
- TCGv_i64 tcg_round;
- uint64_t round_const;
- int i;
+ GVecGen2iFn *gvec_fn;
if (extract32(immh, 3, 1) && !is_q) {
unallocated_encoding(s);
@@ -11119,13 +11110,12 @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
switch (opcode) {
case 0x02: /* SSRA / USRA (accumulate) */
- gen_gvec_fn2i(s, is_q, rd, rn, shift,
- is_u ? arm_gen_gvec_usra : arm_gen_gvec_ssra, size);
- return;
+ gvec_fn = is_u ? arm_gen_gvec_usra : arm_gen_gvec_ssra;
+ break;
case 0x08: /* SRI */
- gen_gvec_fn2i(s, is_q, rd, rn, shift, arm_gen_gvec_sri, size);
- return;
+ gvec_fn = arm_gen_gvec_sri;
+ break;
case 0x00: /* SSHR / USHR */
if (is_u) {
@@ -11133,49 +11123,31 @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
/* Shift count the same size as element size produces zero. */
tcg_gen_gvec_dup8i(vec_full_reg_offset(s, rd),
is_q ? 16 : 8, vec_full_reg_size(s), 0);
- } else {
- gen_gvec_fn2i(s, is_q, rd, rn, shift, tcg_gen_gvec_shri, size);
+ return;
}
+ gvec_fn = tcg_gen_gvec_shri;
} else {
/* Shift count the same size as element size produces all sign. */
if (shift == 8 << size) {
shift -= 1;
}
- gen_gvec_fn2i(s, is_q, rd, rn, shift, tcg_gen_gvec_sari, size);
+ gvec_fn = tcg_gen_gvec_sari;
}
- return;
+ break;
case 0x04: /* SRSHR / URSHR (rounding) */
- gen_gvec_fn2i(s, is_q, rd, rn, shift,
- is_u ? arm_gen_gvec_urshr : arm_gen_gvec_srshr, size);
- return;
+ gvec_fn = is_u ? arm_gen_gvec_urshr : arm_gen_gvec_srshr;
+ break;
case 0x06: /* SRSRA / URSRA (accum + rounding) */
- gen_gvec_fn2i(s, is_q, rd, rn, shift,
- is_u ? arm_gen_gvec_ursra : arm_gen_gvec_srsra, size);
- return;
+ gvec_fn = is_u ? arm_gen_gvec_ursra : arm_gen_gvec_srsra;
+ break;
default:
g_assert_not_reached();
}
- round_const = 1ULL << (shift - 1);
- tcg_round = tcg_const_i64(round_const);
-
- for (i = 0; i < elements; i++) {
- read_vec_element(s, tcg_rn, rn, i, memop);
- if (accumulate) {
- read_vec_element(s, tcg_rd, rd, i, memop);
- }
-
- handle_shri_with_rndacc(tcg_rd, tcg_rn, tcg_round,
- accumulate, is_u, size, shift);
-
- write_vec_element(s, tcg_rd, rd, i, size);
- }
- tcg_temp_free_i64(tcg_round);
-
- clear_vec_high(s, is_q, rd);
+ gen_gvec_fn2i(s, is_q, rd, rn, shift, gvec_fn, size);
}
/* SHL/SLI - Vector shift left */
--
2.20.1