[PULL 30/80] tcg/loongarch64: Support softmmu unaligned accesses

From: Richard Henderson
Subject: [PULL 30/80] tcg/loongarch64: Support softmmu unaligned accesses
Date: Tue, 16 May 2023 12:40:55 -0700
Test the final byte of an unaligned access.
Use BSTRINS.D to clear the range of bits, rather than AND.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/loongarch64/tcg-target.c.inc | 19 ++++++++++++-------
1 file changed, 12 insertions(+), 7 deletions(-)
diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc
index 33d8e67513..7d0165349d 100644
--- a/tcg/loongarch64/tcg-target.c.inc
+++ b/tcg/loongarch64/tcg-target.c.inc
@@ -848,7 +848,6 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
int fast_ofs = TLB_MASK_TABLE_OFS(mem_index);
int mask_ofs = fast_ofs + offsetof(CPUTLBDescFast, mask);
int table_ofs = fast_ofs + offsetof(CPUTLBDescFast, table);
- tcg_target_long compare_mask;
ldst = new_ldst_label(s);
ldst->is_ld = is_ld;
@@ -872,14 +871,20 @@ static TCGLabelQemuLdst *prepare_host_addr(TCGContext *s, HostAddress *h,
tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_TMP2, TCG_REG_TMP2,
offsetof(CPUTLBEntry, addend));
- /* We don't support unaligned accesses. */
+ /*
+ * For aligned accesses, we check the first byte and include the alignment
+ * bits within the address. For unaligned access, we check that we don't
+ * cross pages using the address of the last byte of the access.
+ */
if (a_bits < s_bits) {
- a_bits = s_bits;
+ unsigned a_mask = (1u << a_bits) - 1;
+ unsigned s_mask = (1u << s_bits) - 1;
+ tcg_out_addi(s, TCG_TYPE_TL, TCG_REG_TMP1, addr_reg, s_mask - a_mask);
+ } else {
+ tcg_out_mov(s, TCG_TYPE_TL, TCG_REG_TMP1, addr_reg);
}
- /* Clear the non-page, non-alignment bits from the address. */
- compare_mask = (tcg_target_long)TARGET_PAGE_MASK | ((1 << a_bits) - 1);
- tcg_out_movi(s, TCG_TYPE_TL, TCG_REG_TMP1, compare_mask);
- tcg_out_opc_and(s, TCG_REG_TMP1, TCG_REG_TMP1, addr_reg);
+ tcg_out_opc_bstrins_d(s, TCG_REG_TMP1, TCG_REG_ZERO,
+ a_bits, TARGET_PAGE_BITS - 1);
/* Compare masked address with the TLB entry. */
ldst->label_ptr[0] = s->code_ptr;
--
2.34.1
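The address computed by the code above can be sketched in plain C. This is a hypothetical model of the comparison address (the function name `tlb_compare_addr` and the fixed `TARGET_PAGE_BITS` value are assumptions for illustration, not part of the patch): for an unaligned access the address is biased to its last byte, then BSTRINS.D clears bits [a_bits, TARGET_PAGE_BITS - 1], leaving the page number and the low alignment bits for the TLB comparison.

```c
#include <stdint.h>

#define TARGET_PAGE_BITS 12  /* assumed 4 KiB target pages */

/*
 * Model of the comparison address built by prepare_host_addr():
 * - a_bits: log2 of the required alignment
 * - s_bits: log2 of the access size
 * When a_bits < s_bits, bias the address to the last byte of the
 * access so a page-crossing access fails the TLB compare; then clear
 * bits [a_bits, TARGET_PAGE_BITS - 1], as BSTRINS.D with the zero
 * register does.
 */
static uint64_t tlb_compare_addr(uint64_t addr, unsigned a_bits, unsigned s_bits)
{
    uint64_t biased = addr;

    if (a_bits < s_bits) {
        uint64_t a_mask = (1u << a_bits) - 1;
        uint64_t s_mask = (1u << s_bits) - 1;
        biased += s_mask - a_mask;   /* address of the last byte */
    }

    /* Bits a_bits .. TARGET_PAGE_BITS - 1 are cleared by BSTRINS.D. */
    uint64_t clear_mask = ((uint64_t)1 << TARGET_PAGE_BITS) - (1u << a_bits);
    return biased & ~clear_mask;
}
```

For example, an 8-byte access (s_bits = 3, a_bits = 0) at 0xFFD yields a comparison address in the *next* page (0x1000), so it correctly misses the TLB entry for page 0 and takes the slow path, while the same access at 0xFF0 stays within page 0.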