From: Nicolas Pitre
Subject: [PATCH] target/riscv/pmp: guard against PMP ranges with a negative size
Date: Wed, 15 Jun 2022 17:11:51 -0400 (EDT)

For a TOR entry to match, the start address must be lower than the end
address. Normally this is always the case, but correct code might still
run into the following scenario:

Initial state:

        pmpaddr3 = 0x2000       pmp3cfg = OFF
        pmpaddr4 = 0x3000       pmp4cfg = TOR

Execution:

        1. write 0x40ff to pmpaddr3
        2. write 0x32ff to pmpaddr4
        3. set pmp3cfg to NAPOT with a read-modify-write on pmpcfg0
        4. set pmp4cfg to NAPOT with a read-modify-write on pmpcfg1
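Concretely, after step (2), the TOR computation patched below yields for
pmp4:

        sa = 0x40ff << 2       = 0x103fc
        ea = (0x32ff << 2) - 1 = 0x0cbfb

i.e. sa > ea, a "negative" range.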

When (2) is emulated, a call to pmp_update_rule() creates a negative
range for pmp4 as pmp4cfg is still set to TOR. And when (3) is emulated,
a call to tlb_flush() is performed, causing pmp_get_tlb_size() to return
a very creatively large TLB size for pmp4. This, in turn, may result in
accesses to non-existent/uninitialized memory regions and a fault, so that
(4) ends up never being executed.
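For illustration, here is a minimal standalone sketch (not QEMU code; it
merely assumes the size ends up being derived along the lines of
ea - sa + 1) showing how such an inverted range wraps around in unsigned
arithmetic:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* TOR encoding as in pmp_update_rule_addr(), with the addresses
     * written in steps (1) and (2) above */
    uint64_t sa = (uint64_t)0x40ff << 2;        /* 0x103fc */
    uint64_t ea = ((uint64_t)0x32ff << 2) - 1;  /* 0x0cbfb, below sa */

    /* hypothetical size computation of the form ea - sa + 1: the
     * unsigned subtraction wraps around when ea < sa */
    uint64_t size = ea - sa + 1;

    printf("sa=0x%" PRIx64 " ea=0x%" PRIx64 " size=0x%" PRIx64 "\n",
           sa, ea, size);  /* prints size=0xffffffffffffc800 */
    return 0;
}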

This is in M-mode with MPRV unset, meaning that unlocked PMP entries
should have no effect. Therefore such behavior based on PMP content
is very unexpected.

Make sure no negative PMP range can be created, whether explicitly by
the emulated code or implicitly as in the scenario above.

Signed-off-by: Nicolas Pitre <nico@fluxnic.net>

diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
index 151da3fa08..ea2b67d947 100644
--- a/target/riscv/pmp.c
+++ b/target/riscv/pmp.c
@@ -167,6 +167,9 @@ void pmp_update_rule_addr(CPURISCVState *env, uint32_t pmp_index)
     case PMP_AMATCH_TOR:
         sa = prev_addr << 2; /* shift up from [xx:0] to [xx+2:2] */
         ea = (this_addr << 2) - 1u;
+        if (sa > ea) {
+            sa = ea = 0u;
+        }
         break;
 
     case PMP_AMATCH_NA4:
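With this guard in place, the inverted range from the scenario above
collapses to sa = ea = 0, so pmp_get_tlb_size() should no longer derive
a wrapped-around size from it and step (4) can execute normally.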


