qemu-stable
From: Richard Henderson
Subject: Re: [PATCH] target/i386: Fix physical address truncation when PAE is enabled
Date: Wed, 20 Dec 2023 15:22:53 +1100
User-agent: Mozilla Thunderbird

On 12/18/23 23:56, Michael Brown wrote:
> The address translation logic in get_physical_address() will currently
> truncate physical addresses to 32 bits unless long mode is enabled.
> This is incorrect when using physical address extensions (PAE) outside
> of long mode, with the result that a 32-bit operating system using PAE
> to access memory above 4G will experience undefined behaviour.
> 
> The truncation code was originally introduced in commit 33dfdb5 ("x86:
> only allow real mode to access 32bit without LMA"), where it applied
> only to translations performed while paging is disabled (and so cannot
> affect guests using PAE).
> 
> Commit 9828198 ("target/i386: Add MMU_PHYS_IDX and MMU_NESTED_IDX")
> rearranged the code such that the truncation also applied to the use
> of MMU_PHYS_IDX and MMU_NESTED_IDX.  Commit 4a1e9d4 ("target/i386: Use
> atomic operations for pte updates") brought this truncation into scope
> for page table entry accesses, and is the first commit for which a
> Windows 10 32-bit guest will reliably fail to boot if memory above 4G
> is present.
> 
> Fix by testing for PAE being enabled via the relevant bit in CR4,
> instead of testing for long mode being enabled.  PAE must be enabled
> as a prerequisite of long mode, and so this is a generalisation of the
> current test.
> 
> Remove the #ifdef TARGET_X86_64 check since PAE exists in both 32-bit
> and 64-bit processors, and both should exhibit the same truncation
> behaviour when PAE is disabled.
> 
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2040
> Signed-off-by: Michael Brown <mcb30@ipxe.org>
> ---
>  target/i386/tcg/sysemu/excp_helper.c | 6 ++----
>  1 file changed, 2 insertions(+), 4 deletions(-)
> 
> diff --git a/target/i386/tcg/sysemu/excp_helper.c b/target/i386/tcg/sysemu/excp_helper.c
> index 5b86f439ad..3d0d0d78d7 100644
> --- a/target/i386/tcg/sysemu/excp_helper.c
> +++ b/target/i386/tcg/sysemu/excp_helper.c
> @@ -582,12 +582,10 @@ static bool get_physical_address(CPUX86State *env, vaddr addr,
>      /* Translation disabled. */
>      out->paddr = addr & x86_get_a20_mask(env);
> -#ifdef TARGET_X86_64
> -    if (!(env->hflags & HF_LMA_MASK)) {
> -        /* Without long mode we can only address 32bits in real mode */
> +    if (!(env->cr[4] & CR4_PAE_MASK)) {
> +        /* Without PAE we can address only 32 bits */
>          out->paddr = (uint32_t)out->paddr;
>      }
> -#endif

This is not the correct refactoring.

I agree that what we're currently doing is wrong, especially for MMU_PHYS_IDX, but for the default case, if CR0.PG == 0, then CR4.PAE is ignored (Intel SDM vol 3, section 4.1.1).

I suspect the correct fix is to have MMU_PHYS_IDX pass through the input address unchanged, and it is the responsibility of the higher level paging mmu_idx to truncate physical addresses per PG_MODE_* before recursing.
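The suggested split could be sketched roughly as below.  This is a hedged illustration of the idea, not the committed QEMU fix: truncate_paddr() is a hypothetical helper, and the PG_MODE_* flags merely mirror QEMU's flag names.  MMU_PHYS_IDX would skip this helper entirely; only the paging walker would call it before recursing on page-table accesses.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative subset of paging-mode flags, modeled on QEMU's PG_MODE_*. */
#define PG_MODE_PAE  (1 << 1)
#define PG_MODE_LMA  (1 << 2)

/*
 * Hypothetical helper: truncate a physical address produced by a page
 * walk according to the paging mode in effect.  Without PAE, physical
 * addresses are at most 32 bits wide; with PAE (and hence also in long
 * mode, which requires PAE) the full address passes through unchanged.
 */
static uint64_t truncate_paddr(uint64_t paddr, int pg_mode)
{
    if (!(pg_mode & (PG_MODE_PAE | PG_MODE_LMA))) {
        return (uint32_t)paddr;
    }
    return paddr;
}
```

With this split, the physical MMU index never masks the address, so nested/SMM physical accesses see the guest's full address, while 32-bit non-PAE paging still gets its 32-bit truncation at the point where the mode is actually known.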


> 
>      out->prot = PAGE_READ | PAGE_WRITE | PAGE_EXEC;
>      out->page_size = TARGET_PAGE_SIZE;
>      return true;

r~
