qemu-devel
Re: [PATCH RFC 0/6] i386/pc: Fix creation of >= 1Tb guests on AMD systems with IOMMU


From: David Edmondson
Subject: Re: [PATCH RFC 0/6] i386/pc: Fix creation of >= 1Tb guests on AMD systems with IOMMU
Date: Wed, 23 Jun 2021 08:40:56 +0100

On Tuesday, 2021-06-22 at 15:16:29 -06, Alex Williamson wrote:

>>         Additionally, as an alternative to the hardcoded ranges we use today,
>>         VFIO could advertise the platform's valid IOVA ranges without
>>         necessarily requiring a PCI device to be added to the vfio container.
>>         That would mean fetching the valid IOVA ranges from VFIO rather than
>>         hardcoding them as we do today. But sadly, it wouldn't work for older
>>         hypervisors.
>
>
> $ grep -h . /sys/kernel/iommu_groups/*/reserved_regions | sort -u
> 0x00000000fee00000 0x00000000feefffff msi
> 0x000000fd00000000 0x000000ffffffffff reserved
>
> Ideally we might take that into account on all hosts, but of course
> then we run into massive compatibility issues when we consider
> migration.  We run into similar problems when people try to assign
> devices to non-x86 TCG hosts, where the arch doesn't have a natural
> memory hole overlapping the msi range.
>
> The issue here is similar to trying to find a set of supported CPU
> flags across hosts: QEMU only has visibility into the host where it runs,
> so an upper-level tool needs to be able to pass through information about
> compatibility to all possible migration targets.  Towards that end, we
> should probably have command line options that allow specifying either
> usable or reserved GPA address ranges.  For example, something
> like:
>       --reserved-mem-ranges=host
>
> Or explicitly:
>
>       --reserved-mem-ranges=13G@1010G,1M@4078M
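
For concreteness, the proposed SIZE@OFFSET syntax could be parsed along
these lines. This is a sketch only: neither the option name nor the
format exists in QEMU today, and the function names here are made up
for illustration.

```python
# Sketch of parsing a hypothetical "SIZE@OFFSET[,SIZE@OFFSET...]"
# option value such as "13G@1010G,1M@4078M". Not an existing QEMU
# option; the names below are invented for this example.

SUFFIXES = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30, "T": 1 << 40}

def parse_suffixed(s: str) -> int:
    """Parse a value like '13G' or '4078M' into a byte count."""
    if s and s[-1] in SUFFIXES:
        return int(s[:-1]) * SUFFIXES[s[-1]]
    return int(s)

def parse_reserved_ranges(spec: str):
    """Return a list of (base, last) byte addresses for a spec string."""
    ranges = []
    for token in spec.split(","):
        size_str, base_str = token.split("@")
        size = parse_suffixed(size_str)
        base = parse_suffixed(base_str)
        ranges.append((base, base + size - 1))
    return ranges

for base, last in parse_reserved_ranges("13G@1010G,1M@4078M"):
    print(f"reserved: {base:#018x} - {last:#018x}")
```

Note that 1M@4078M decodes to 0xfee00000-0xfeefffff, i.e. the msi
region shown in the reserved_regions output above.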

Would this not naturally be a property of a machine model?

dme.
-- 
Seems I'm not alone at being alone.
