From: Marc Zyngier
Subject: Re: [PATCH v16 3/7] KVM: arm64: Introduce MTE VM feature
Date: Mon, 21 Jun 2021 10:01:00 +0100
User-agent: Wanderlust/2.15.9 (Almost Unreal) SEMI-EPG/1.14.7 (Harue) FLIM-LB/1.14.9 (Gojō) APEL-LB/10.8 EasyPG/1.0.0 Emacs/27.1 (x86_64-pc-linux-gnu) MULE/6.0 (HANACHIRUSATO)

On Fri, 18 Jun 2021 14:28:22 +0100,
Steven Price <steven.price@arm.com> wrote:
> 
> Add a new VM feature 'KVM_ARM_CAP_MTE' which enables memory tagging
> for a VM. This will expose the feature to the guest and automatically
> tag memory pages touched by the VM as PG_mte_tagged (and clear the tag
> storage) to ensure that the guest cannot see stale tags, and so that
> the tags are correctly saved/restored across swap.
> 
> Actually exposing the new capability to user space happens in a later
> patch.
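
(For context: once that later patch lands, a VMM would enable this with
KVM_ENABLE_CAP on the VM fd early in VM setup, roughly:

	struct kvm_enable_cap cap = { .cap = KVM_CAP_ARM_MTE };

	ioctl(vm_fd, KVM_ENABLE_CAP, &cap);

where the capability name/number is whatever the later patch defines --
this is only a sketch, not the code from this series.)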
> 
> Signed-off-by: Steven Price <steven.price@arm.com>
> ---
>  arch/arm64/include/asm/kvm_emulate.h |  3 ++
>  arch/arm64/include/asm/kvm_host.h    |  3 ++
>  arch/arm64/kvm/hyp/exception.c       |  3 +-
>  arch/arm64/kvm/mmu.c                 | 62 +++++++++++++++++++++++++++-
>  arch/arm64/kvm/sys_regs.c            |  7 ++++
>  include/uapi/linux/kvm.h             |  1 +
>  6 files changed, 77 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index f612c090f2e4..6bf776c2399c 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -84,6 +84,9 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
>       if (cpus_have_const_cap(ARM64_MISMATCHED_CACHE_TYPE) ||
>           vcpu_el1_is_32bit(vcpu))
>               vcpu->arch.hcr_el2 |= HCR_TID2;
> +
> +     if (kvm_has_mte(vcpu->kvm))
> +             vcpu->arch.hcr_el2 |= HCR_ATA;
>  }
>  
>  static inline unsigned long *vcpu_hcr(struct kvm_vcpu *vcpu)
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 7cd7d5c8c4bc..afaa5333f0e4 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -132,6 +132,8 @@ struct kvm_arch {
>  
>       u8 pfr0_csv2;
>       u8 pfr0_csv3;
> +     /* Memory Tagging Extension enabled for the guest */
> +     bool mte_enabled;
>  };
>  
>  struct kvm_vcpu_fault_info {
> @@ -769,6 +771,7 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
>  #define kvm_arm_vcpu_sve_finalized(vcpu) \
>       ((vcpu)->arch.flags & KVM_ARM64_VCPU_SVE_FINALIZED)
>  
> +#define kvm_has_mte(kvm) (system_supports_mte() && (kvm)->arch.mte_enabled)
>  #define kvm_vcpu_has_pmu(vcpu)                                       \
>       (test_bit(KVM_ARM_VCPU_PMU_V3, (vcpu)->arch.features))
>  
> diff --git a/arch/arm64/kvm/hyp/exception.c b/arch/arm64/kvm/hyp/exception.c
> index 73629094f903..56426565600c 100644
> --- a/arch/arm64/kvm/hyp/exception.c
> +++ b/arch/arm64/kvm/hyp/exception.c
> @@ -112,7 +112,8 @@ static void enter_exception64(struct kvm_vcpu *vcpu, unsigned long target_mode,
>       new |= (old & PSR_C_BIT);
>       new |= (old & PSR_V_BIT);
>  
> -     // TODO: TCO (if/when ARMv8.5-MemTag is exposed to guests)
> +     if (kvm_has_mte(vcpu->kvm))
> +             new |= PSR_TCO_BIT;
>  
>       new |= (old & PSR_DIT_BIT);
>  
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index c5d1f3c87dbd..f5305b7561ad 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -822,6 +822,45 @@ transparent_hugepage_adjust(struct kvm_memory_slot *memslot,
>       return PAGE_SIZE;
>  }
>  
> +/*
> + * The page will be mapped in stage 2 as Normal Cacheable, so the VM will be
> + * able to see the page's tags and therefore they must be initialised first. If
> + * PG_mte_tagged is set, tags have already been initialised.
> + *
> + * The race in the test/set of the PG_mte_tagged flag is handled by:
> + * - preventing VM_SHARED mappings in a memslot with MTE preventing two VMs
> + *   racing to sanitise the same page
> + * - mmap_lock protects between a VM faulting a page in and the VMM performing
> + *   an mprotect() to add VM_MTE
> + */
> +static int sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
> +                          unsigned long size)
> +{
> +     unsigned long i, nr_pages = size >> PAGE_SHIFT;
> +     struct page *page;
> +
> +     if (!kvm_has_mte(kvm))
> +             return 0;
> +
> +     /*
> +      * pfn_to_online_page() is used to reject ZONE_DEVICE pages
> +      * that may not support tags.
> +      */
> +     page = pfn_to_online_page(pfn);
> +
> +     if (!page)
> +             return -EFAULT;
> +
> +     for (i = 0; i < nr_pages; i++, page++) {
> +             if (!test_bit(PG_mte_tagged, &page->flags)) {
> +                     mte_clear_page_tags(page_address(page));
> +                     set_bit(PG_mte_tagged, &page->flags);
> +             }
> +     }
> +
> +     return 0;
> +}
> +
>  static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>                         struct kvm_memory_slot *memslot, unsigned long hva,
>                         unsigned long fault_status)
> @@ -971,8 +1010,16 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>       if (writable)
>               prot |= KVM_PGTABLE_PROT_W;
>  
> -     if (fault_status != FSC_PERM && !device)
> +     if (fault_status != FSC_PERM && !device) {
> +             /* Check the VMM hasn't introduced a new VM_SHARED VMA */
> +             if (kvm_has_mte(kvm) && vma->vm_flags & VM_SHARED)
> +                     return -EINVAL;

nit: I'd rather we return -EFAULT here. That's consistent with other
cases where we can't satisfy the mapping.
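
i.e. something along these lines (untested sketch, identical to the
hunk above apart from the error code):

	/* Check the VMM hasn't introduced a new VM_SHARED VMA */
	if (kvm_has_mte(kvm) && vma->vm_flags & VM_SHARED)
		return -EFAULT;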

Thanks,

        M.

-- 
Without deviation from the norm, progress is not possible.


