


From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH] SMI handler should set the CPL to zero and save and restore it on rsm.
Date: Tue, 13 May 2014 20:24:47 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0

On 27/04/2014 19:25, Kevin O'Connor wrote:
> I was wondering about that as well.  The Intel docs state that the CPL
> is bits 0-1 of the CS.selector register, and that protected mode
> starts immediately after setting the PE bit.  The CS.selector field
> should be the value of %cs in real mode, which is the value added to
> eip (after shifting it left by 4).
> 
> I guess that means that the real mode code that enables the PE bit
> must run with a code segment aligned to a value of 4.  (Which
> effectively means code alignment of 64 bytes because of the segment
> shift.)

It turns out that this is not a requirement, which means that the 
protected mode transition is exactly the one place where CPL is not 
redundant.  The CPL remains zero until you reload CS with a long jump.

Your patch gets it right because after a CR0 write it doesn't attempt 
to recompute the CPL, but you need the following partial revert in 
order to satisfy virtualization extensions (SVM).  Without it, the
guest will triple fault after setting CR0.PE=1, unless CS's low 2 bits
are 00.  The hypervisor gets a CR0_WRITE vmexit, but then the processor
fails to execute guest code from a non-conforming ring-0 code segment
at CPL>0.

Signed-off-by: Paolo Bonzini <address@hidden>

diff --git a/target-i386/cpu.h b/target-i386/cpu.h
index e9cbdab..478f356 100644
--- a/target-i386/cpu.h
+++ b/target-i386/cpu.h
@@ -124,9 +124,9 @@
 #define ID_MASK                 0x00200000
 
 /* hidden flags - used internally by qemu to represent additional cpu
-   states. Only the INHIBIT_IRQ, SMM and SVMI are not redundant. We
-   avoid using the IOPL_MASK, TF_MASK, VM_MASK and AC_MASK bit
-   positions to ease oring with eflags. */
+   states. Only the CPL, INHIBIT_IRQ, SMM and SVMI are not
+   redundant. We avoid using the IOPL_MASK, TF_MASK, VM_MASK and AC_MASK
+   bit positions to ease oring with eflags. */
 /* current cpl */
 #define HF_CPL_SHIFT         0
 /* true if soft mmu is being used */
@@ -1052,6 +1052,16 @@ int cpu_x86_get_descr_debug(CPUX86State *env, unsigned int selector,
                             target_ulong *base, unsigned int *limit,
                             unsigned int *flags);
 
+/* wrapper, just in case memory mappings must be changed */
+static inline void cpu_x86_set_cpl(CPUX86State *s, int cpl)
+{
+#if HF_CPL_MASK == 3
+    s->hflags = (s->hflags & ~HF_CPL_MASK) | cpl;
+#else
+#error HF_CPL_MASK is hardcoded
+#endif
+}
+
 /* op_helper.c */
 /* used for debug or cpu save/restore */
 void cpu_get_fp80(uint64_t *pmant, uint16_t *pexp, floatx80 f);
diff --git a/target-i386/svm_helper.c b/target-i386/svm_helper.c
index 846eaa5..29ca012 100644
--- a/target-i386/svm_helper.c
+++ b/target-i386/svm_helper.c
@@ -282,6 +282,9 @@ void helper_vmrun(CPUX86State *env, int aflag, int next_eip_addend)
                           env->vm_vmcb + offsetof(struct vmcb, save.dr7));
     env->dr[6] = ldq_phys(cs->as,
                           env->vm_vmcb + offsetof(struct vmcb, save.dr6));
+    cpu_x86_set_cpl(env, ldub_phys(cs->as,
+                                   env->vm_vmcb + offsetof(struct vmcb,
+                                                           save.cpl)));
 
     /* FIXME: guest state consistency checks */



