From: Chao Peng
Subject: Re: [PATCH v10 9/9] KVM: Enable and expose KVM_MEM_PRIVATE
Date: Tue, 17 Jan 2023 21:12:51 +0800

On Sat, Jan 14, 2023 at 12:01:01AM +0000, Sean Christopherson wrote:
> On Fri, Dec 02, 2022, Chao Peng wrote:
> > @@ -10357,6 +10364,12 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
> >  
> >             if (kvm_check_request(KVM_REQ_UPDATE_CPU_DIRTY_LOGGING, vcpu))
> >                     static_call(kvm_x86_update_cpu_dirty_logging)(vcpu);
> > +
> > +           if (kvm_check_request(KVM_REQ_MEMORY_MCE, vcpu)) {
> > +                   vcpu->run->exit_reason = KVM_EXIT_SHUTDOWN;
> 
> Synthesizing triple fault shutdown is not the right approach.  Even with TDX's
> MCE "architecture" (heavy sarcasm), it's possible that host userspace and the
> guest have a paravirt interface for handling memory errors without killing the
> host.

Agreed, shutdown is not the correct choice. I see you made the change below:

send_sig_mceerr(BUS_MCEERR_AR, (void __user *)hva, PAGE_SHIFT, current)

The MCE may happen in a thread other than the KVM thread, so sending the
signal to the 'current' thread may not be the expected behavior. Also,
how can userspace tell whether the MCE hit a shared page or a private
page? Do we care?
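
For reference, KVM's existing hwpoison path does essentially this; a
minimal sketch (the helper name below is mine, not from the patch),
with the task made an explicit parameter to highlight the "which
thread?" question:

	/*
	 * Deliver an action-required MCE SIGBUS for the poisoned host
	 * virtual address.  Passing 'current' here is only right if the
	 * error is consumed on the faulting vCPU thread; an MCE surfaced
	 * from another context would need the owning task resolved
	 * explicitly.
	 */
	static void send_memory_failure_signal(unsigned long hva,
					       struct task_struct *tsk)
	{
		send_sig_mceerr(BUS_MCEERR_AR, (void __user *)hva,
				PAGE_SHIFT, tsk);
	}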

> 
> > +                   r = 0;
> > +                   goto out;
> > +           }
> >     }
> 
> 
> > @@ -1982,6 +2112,10 @@ int __kvm_set_memory_region(struct kvm *kvm,
> >          !access_ok((void __user *)(unsigned long)mem->userspace_addr,
> >                     mem->memory_size))
> >             return -EINVAL;
> > +   if (mem->flags & KVM_MEM_PRIVATE &&
> > +           (mem->restricted_offset & (PAGE_SIZE - 1) ||
> 
> Align indentation.
> 
> > +            mem->restricted_offset > U64_MAX - mem->memory_size))
> 
> Strongly prefer to use similar logic to existing code that detects wraps:
> 
> 		mem->restricted_offset + mem->memory_size < mem->restricted_offset
> 
> This is also where I'd like to add the "gfn is aligned to offset" check, though
> my brain is too fried to figure that out right now.
> 
> > +           return -EINVAL;
> >     if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_MEM_SLOTS_NUM)
> >             return -EINVAL;
> >     if (mem->guest_phys_addr + mem->memory_size < mem->guest_phys_addr)
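
The quoted guest_phys_addr check just above uses exactly that wrap
pattern. For illustration, the private-memory validation rewritten in
the same style (a sketch only; the gfn/offset alignment check mentioned
above is still an open question and is not attempted here):

	/* Offset must be page-aligned, and offset + size must not wrap. */
	if (mem->flags & KVM_MEM_PRIVATE &&
	    (mem->restricted_offset & (PAGE_SIZE - 1) ||
	     mem->restricted_offset + mem->memory_size < mem->restricted_offset))
		return -EINVAL;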
> > @@ -2020,6 +2154,9 @@ int __kvm_set_memory_region(struct kvm *kvm,
> >             if ((kvm->nr_memslot_pages + npages) < kvm->nr_memslot_pages)
> >                     return -EINVAL;
> >     } else { /* Modify an existing slot. */
> > +           /* Private memslots are immutable, they can only be deleted. */
> 
> I'm 99% certain I suggested this, but if we're going to make these memslots
> immutable, then we should straight up disallow dirty logging, otherwise we'll
> end up with a bizarre uAPI.

But in my mind dirty logging will be needed fairly soon, once live
migration gets supported?
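
For illustration, if we did go the disallow route suggested here, a
hypothetical check (not in the posted patch) could be as simple as:

	/* Private memslots are immutable; no dirty logging support. */
	if ((mem->flags & KVM_MEM_PRIVATE) &&
	    (mem->flags & KVM_MEM_LOG_DIRTY_PAGES))
		return -EINVAL;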

> 
> > +           if (mem->flags & KVM_MEM_PRIVATE)
> > +                   return -EINVAL;
> >             if ((mem->userspace_addr != old->userspace_addr) ||
> >                 (npages != old->npages) ||
> >                 ((mem->flags ^ old->flags) & KVM_MEM_READONLY))
> > @@ -2048,10 +2185,28 @@ int __kvm_set_memory_region(struct kvm *kvm,
> >     new->npages = npages;
> >     new->flags = mem->flags;
> >     new->userspace_addr = mem->userspace_addr;
> > +   if (mem->flags & KVM_MEM_PRIVATE) {
> > +           new->restricted_file = fget(mem->restricted_fd);
> > +           if (!new->restricted_file ||
> > +               !file_is_restrictedmem(new->restricted_file)) {
> > +                   r = -EINVAL;
> > +                   goto out;
> > +           }
> > +           new->restricted_offset = mem->restricted_offset;

I see you changed the type of slot->restricted_offset from loff_t to
gfn_t and used pgoff_t when doing restrictedmem_bind/unbind(). Using a
page index is reasonable for KVM internally and sounds simpler than
loff_t. But then we also need to initialize it as a page index here, in
addition to the changes in the two other places below. This is needed
whenever restricted_offset != 0.

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 547b92215002..49e375e78f30 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2364,8 +2364,7 @@ static inline int kvm_restricted_mem_get_pfn(struct kvm_memory_slot *slot,
                                             gfn_t gfn, kvm_pfn_t *pfn,
                                             int *order)
 {
-       pgoff_t index = gfn - slot->base_gfn +
-                       (slot->restricted_offset >> PAGE_SHIFT);
+       pgoff_t index = gfn - slot->base_gfn + slot->restricted_offset;
        struct page *page;
        int ret;
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 01db35ddd5b3..7439bdcb0d04 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -935,7 +935,7 @@ static bool restrictedmem_range_is_valid(struct kvm_memory_slot *slot,
                                         pgoff_t start, pgoff_t end,
                                         gfn_t *gfn_start, gfn_t *gfn_end)
 {
-       unsigned long base_pgoff = slot->restricted_offset >> PAGE_SHIFT;
+       unsigned long base_pgoff = slot->restricted_offset;
 
        if (start > base_pgoff)
                *gfn_start = slot->base_gfn + start - base_pgoff;
@@ -2275,7 +2275,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
                        r = -EINVAL;
                        goto out;
                }
-               new->restricted_offset = mem->restricted_offset;
+               new->restricted_offset = mem->restricted_offset >> PAGE_SHIFT;
        }
 
        r = kvm_set_memslot(kvm, old, new, change);

Chao
> > +   }
> > +
> > +   new->kvm = kvm;
> 
> Set this above, just so that the code flows better.


