
Re: [Qemu-devel] x86 segment limits enforcement with TCG


From: Richard Henderson
Subject: Re: [Qemu-devel] x86 segment limits enforcement with TCG
Date: Tue, 26 Feb 2019 08:56:30 -0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.4.0

On 2/25/19 4:32 PM, Stephen Checkoway wrote:
> FWIW, I figured out an approach to this. Essentially, I'm reusing the 
> function which computes linear addresses to enforce not only segment limits 
> (in everything but long mode), but also read/write access (in protected mode).
> 
> Unfortunately, that meant every call to the linear address computation has to 
> be augmented with an access size and whether it's a store or not. The patch 
> is pretty large so I won't include it here, but you can view it at 
> <https://github.com/stevecheckoway/qemu/commit/ac58652efacedc53f3831301ea0326ac8f882b18>.
> 
> If this is something that qemu would like, then I think two additional things 
> are definitely required:
> 1. Tests. make check passes and the firmware I have which necessitated the 
> checks appears to work, but the change touches almost every 
> guest-memory-accessing x86 instruction.
> 2. This is going to slow down the common case—where segments have a 0 base 
> address and a limit of 0xFFFFFFFF—and there's no real need to do that. It 
> seems like something akin to addseg could be used to decide when to insert 
> the checks. I don't really understand how that works and in my case, segments 
> with nonzero bases and non-0xFFFFFFFF limits are the norm so I didn't 
> investigate that. Something similar could probably be done to omit the 
> writable segment checks.
> 
> Finally, there are some limitations. The amount of memory touched by xsave 
> (and the related instructions) depends on edx:eax. I didn't bother checking 
> that at all. Similarly, there are some MPX instructions that don't do any 
> access checks when they really should. And finally, there's one place that 
> checks for an access size of 8 bytes when, in some cases, it should be 16.
> 
> I'm happy to work to upstream this, if the approach is broadly acceptable and 
> the functionality is desired.
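For reference, the per-access check described in the quoted patch can be sketched roughly as follows. This is a minimal, self-contained illustration with hypothetical names, not the actual QEMU helper: it enforces the segment limit and write permission before forming the linear address.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical cached segment state, mirroring an x86 descriptor. */
typedef struct {
    uint32_t base;
    uint32_t limit;     /* byte-granular, inclusive */
    bool     writable;  /* from the descriptor access rights */
} SegCache;

/* Compute the linear address for an access of `size` bytes at `addr`.
 * Returns false (standing in for raising #GP) if the access would
 * exceed the segment limit or write to a read-only segment. */
static bool lea_checked(const SegCache *seg, uint32_t addr, int size,
                        bool is_store, uint32_t *linear)
{
    if (is_store && !seg->writable) {
        return false;               /* write to a read-only segment */
    }
    /* The last byte touched must lie within the limit. */
    if (addr > seg->limit || (uint32_t)(size - 1) > seg->limit - addr) {
        return false;               /* limit violation */
    }
    *linear = seg->base + addr;
    return true;
}
```

This is the check that, in the patch as described, has to be threaded through every guest memory access along with the access size and store flag.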

I am happy to have proper segmentation support upstream, but having read
through your patch I think I would approach it differently: I would incorporate
segmentation into the softmmu translation process.

Having many softmmu tlbs, even if unused, used to be expensive, and managing
them difficult.  However, some (very) recent work has reduced that expense.

I would add 6 new MMU_MODES, one for each segment register.  Translation for
these modes would proceed in two stages, just like real segmentation+paging.
So your access checks happen as a part of normal memory accesses.  (We have an
example of two-level translation in target/arm, S1_ptw_translate.)
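The two-stage lookup could be sketched roughly like this (a simplified illustration under assumed names, not the actual softmmu code; in QEMU the common case is a per-mode TLB fast path, and the functions below correspond to the slow-path fill):

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct { uint32_t base, limit; bool writable; } Seg;

/* Stage 1: segmentation. A miss in the per-segment TLB would run
 * this check, just as a paging miss walks the page tables. */
static bool seg_translate(const Seg *s, uint32_t va, int size,
                          bool is_store, uint32_t *linear)
{
    if ((is_store && !s->writable) ||
        va > s->limit || (uint32_t)(size - 1) > s->limit - va) {
        return false;                   /* raises #GP in the guest */
    }
    *linear = s->base + va;
    return true;
}

/* Stage 2: paging. Stubbed as an identity map for the sketch;
 * the real code walks the page tables rooted at CR3. */
static bool page_translate(uint32_t linear, uint64_t *phys)
{
    *phys = linear;
    return true;
}

/* Filling one of the proposed per-segment TLB modes chains both
 * stages, analogous to target/arm's S1_ptw_translate. */
static bool tlb_fill_seg(const Seg *s, uint32_t va, int size,
                         bool is_store, uint64_t *phys)
{
    uint32_t linear;
    return seg_translate(s, va, size, is_store, &linear) &&
           page_translate(linear, phys);
}
```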

These new tlbs would need to be flushed on any segment register change, and
with any change to the underlying page tables.  They would need to be flushed
on privilege level changes (or we'd need to add another 6 for ring0).

I would extend the check for HF_ADDSEG_MASK to include 4GB segment limits.
With that, "normal" 32-bit operation would ignore these new tlbs and continue
to use the current flat view of the virtual address space.
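The translation-time decision could look roughly like this (a hedged sketch; the actual hflags and MMU-index logic lives in target/i386 and differs in detail):

```c
#include <stdbool.h>
#include <stdint.h>

/* A segment can bypass the checked path only when it is flat:
 * zero base and a 4GB limit. This extends the existing ADDSEG
 * idea, which today only considers the base. */
static bool seg_is_flat(uint32_t base, uint32_t limit)
{
    return base == 0 && limit == 0xFFFFFFFFu;
}

/* Pick the MMU index at translation time: flat segments keep the
 * current flat view of the address space; non-flat ones route
 * through the hypothetical per-segment MMU mode. */
static int mmu_index_for(int flat_idx, int seg_idx,
                         uint32_t base, uint32_t limit)
{
    return seg_is_flat(base, limit) ? flat_idx : seg_idx;
}
```

Since the decision is baked into the generated code, any change to a segment register that flips this predicate also has to invalidate the affected translations.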

That should mean no slowdown in the common case, no need to adjust every single
memory access in target/i386/translate.c, and fewer runtime calls to helper
functions when segmentation is in effect.


r~


