From: Marc Zyngier
Subject: Re: [PATCH v3 3/5] hw/arm/virt: Honor highmem setting when computing the memory map
Date: Thu, 06 Jan 2022 21:26:34 +0000
User-agent: Wanderlust/2.15.9 (Almost Unreal) SEMI-EPG/1.14.7 (Harue) FLIM-LB/1.14.9 (Gojō) APEL-LB/10.8 EasyPG/1.0.0 Emacs/27.1 (x86_64-pc-linux-gnu) MULE/6.0 (HANACHIRUSATO)

On Wed, 05 Jan 2022 09:22:39 +0000,
Eric Auger <eric.auger@redhat.com> wrote:
> 
> Hi Marc,
> 
> On 12/27/21 10:16 PM, Marc Zyngier wrote:
> > Even when the VM is configured with highmem=off, the highest_gpa
> > field includes devices that are above the 4GiB limit.
> > Similarly, nothing seems to check that the memory is within
> > the limit set by the highmem=off option.
> >
> > This leads to failures in virt_kvm_type() on systems that have
> > a crippled IPA range, as the reported IPA space is larger than
> > what it should be.
> >
> > Instead, honor the user-specified limit to only use the devices
> > at the lowest end of the spectrum, and fail if we have memory
> > crossing the 4GiB limit.
> >
> > Reviewed-by: Andrew Jones <drjones@redhat.com>
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  hw/arm/virt.c | 9 ++++++++-
> >  1 file changed, 8 insertions(+), 1 deletion(-)
> >
> > diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> > index 8b600d82c1..84dd3b36fb 100644
> > --- a/hw/arm/virt.c
> > +++ b/hw/arm/virt.c
> > @@ -1678,6 +1678,11 @@ static void virt_set_memmap(VirtMachineState *vms)
> >          exit(EXIT_FAILURE);
> >      }
> >  
> > +    if (!vms->highmem &&
> > +        vms->memmap[VIRT_MEM].base + ms->maxram_size > 4 * GiB) {
> > +        error_report("highmem=off, but memory crosses the 4GiB limit\n");
> > +        exit(EXIT_FAILURE);
> 
> The memory is composed of initial memory and device memory.
> Device memory is placed after the initial memory, but its base has a
> 1GiB alignment, and on top of that each device memory slot gets a 1GiB
> page alignment.
> 
> So the highest memory address can end up larger than
> vms->memmap[VIRT_MEM].base + ms->maxram_size. I would rather do the
> check on device_memory_base + device_memory_size.

Yup, that's a good point.

There is also a corner case in one of the later patches where I check
this limit against the PA using the rounded-up device_memory_size.
This could result in returning an error if the last memory slot would
still fit in the PA space, but the rounded-up quantity wouldn't. I
don't think it matters much, but I'll fix it anyway.
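
For the check itself, I'm thinking of something along these lines (only
a sketch, assuming device_memory_base/device_memory_size are computed
before the check, the same way virt_set_memmap already derives them
from ram_size, maxram_size and ram_slots):

    device_memory_base = ROUND_UP(vms->memmap[VIRT_MEM].base + ms->ram_size,
                                  GiB);
    device_memory_size = ms->maxram_size - ms->ram_size + ms->ram_slots * GiB;

    /* Check the end of device memory, not base + maxram_size */
    if (!vms->highmem &&
        device_memory_base + device_memory_size > 4 * GiB) {
        error_report("highmem=off, but memory crosses the 4GiB limit");
        exit(EXIT_FAILURE);
    }

That way both the 1GiB alignment of the device memory base and the
extra 1GiB per slot are accounted for.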

> > +    }
> >      /*
> >       * We compute the base of the high IO region depending on the
> >       * amount of initial and device memory. The device memory start/size
> > @@ -1707,7 +1712,9 @@ static void virt_set_memmap(VirtMachineState *vms)
> >          vms->memmap[i].size = size;
> >          base += size;
> >      }
> > -    vms->highest_gpa = base - 1;
> > +    vms->highest_gpa = (vms->highmem ?
> > +                        base :
> > +                        vms->memmap[VIRT_MEM].base + ms->maxram_size) - 1;
> As per the previous comment this looks wrong to me if !highmem.

Agreed.

> If !highmem and the RAM requirements are low, we could still benefit
> from REDIST2 and HIGH ECAM, which could fit within the 4GB limit. But
> maybe we simply don't care?

I don't see how. These devices live at a minimum of 256GB, which
contradicts the very meaning of !highmem being a 4GB limit.
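
To make that concrete: as far as I remember, the base of the high IO
region is clamped so that it never drops below 256GiB (that is what
preserves the legacy memory map for guests with less than 255GiB of
RAM), roughly:

    /* Never place the high IO region below 256GiB */
    if (base < vms->memmap[VIRT_MEM].base + 255 * GiB) {
        base = vms->memmap[VIRT_MEM].base + 255 * GiB;
    }

so with !highmem effectively meaning a 32-bit limit, none of the
extended_memmap devices can ever be reachable.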

> If we don't, why don't we simply skip the extended_memmap overlay, as
> suggested in v2? I did not get your reply, sorry.

Because although this makes sense if you only care about a 32-bit
limit, we eventually want to check against an arbitrary PA limit and
enable the individual devices that do fit in that space.

In order to do that, we need to compute the base addresses for these
extra devices. Also, computing 3 base addresses isn't going to be
massively expensive.
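
Concretely, the direction I have in mind keeps the existing placement
loop and adds a per-device "does it fit" decision, something like this
(just a sketch, not the later patch itself; pa_bits stands for whatever
PA limit we end up checking against):

    for (i = VIRT_LOWMEMMAP_LAST; i < ARRAY_SIZE(extended_memmap); i++) {
        hwaddr size = extended_memmap[i].size;
        bool fits;

        base = ROUND_UP(base, size);
        vms->memmap[i].base = base;
        vms->memmap[i].size = size;

        /* Only extend the usable GPA range if the device fits */
        fits = base + size <= BIT_ULL(pa_bits);
        if (fits) {
            vms->highest_gpa = base + size - 1;
        }

        base += size;
    }

with the actual enablement of each device keyed off the same "fits"
result.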

Thanks,

        M.

-- 
Without deviation from the norm, progress is not possible.


