Re: [PATCH v2] hw/arm/virt enable support for virtio-mem


From: Jonathan Cameron
Subject: Re: [PATCH v2] hw/arm/virt enable support for virtio-mem
Date: Wed, 25 Nov 2020 14:56:59 +0000

On Tue, 24 Nov 2020 20:17:35 +0100
David Hildenbrand <david@redhat.com> wrote:

> On 24.11.20 19:11, Jonathan Cameron wrote:
> > On Mon, 9 Nov 2020 20:47:09 +0100
> > David Hildenbrand <david@redhat.com> wrote:
> > 
> > +CC Eric based on a similar query in another branch of the thread.
> >   
> >> On 05.11.20 18:43, Jonathan Cameron wrote:  
> >>> Basically a cut and paste job from the x86 support with the exception of
> >>> needing a larger block size as the Memory Block Size (MIN_SECTION_SIZE)
> >>> on ARM64 in Linux is 1G.
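
(For reference on where the 1G comes from: Linux sizes hotpluggable memory
blocks as MIN_MEMORY_BLOCK_SIZE = 1UL << SECTION_SIZE_BITS, and arm64 currently
has SECTION_SIZE_BITS = 30, i.e. 2^30 bytes = 1 GiB. The exact constants may of
course change in future kernels.)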
> >>>
> >>> Tested:
> >>> * In full emulation and with KVM on an arm64 server.
> >>> * cold and hotplug for the virtio-mem-pci device.
> >>> * Wide range of memory sizes, added at creation and later.
> >>> * Fairly basic usage of the added memory.  Seems to function as normal.
> >>> * NUMA setup with virtio-mem-pci devices on each node.
> >>> * Simple migration test.
> >>>
> >>> The related kernel patch just enables the Kconfig item for ARM64 as an
> >>> alternative to x86 in drivers/virtio/Kconfig.
> >>>
> >>> The original patches from David Hildenbrand stated that he thought it
> >>> should work for ARM64 but it wasn't enabled in the kernel [1].
> >>> It appears he was correct and everything 'just works'.
> >>>
> >>> The build-system-related stuff is intended to ensure virtio-mem support is
> >>> not built for arm32 (the build will fail due to there being no defined
> >>> block size). If there is a more elegant way to do this, please point me in
> >>> the right direction.
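
For anyone curious what that gating can look like, a minimal sketch is below,
loosely modelled on how hw/virtio/Kconfig already gates virtio-mem behind
per-machine support on x86. The symbol names and dependency list are
illustrative, not a claim about what this particular patch does:

    # hw/virtio/Kconfig (sketch): only build virtio-mem where a machine
    # type explicitly advertises support for it.
    config VIRTIO_MEM_SUPPORTED
        bool

    config VIRTIO_MEM
        bool
        default y
        depends on VIRTIO
        depends on VIRTIO_MEM_SUPPORTED
        select MEM_DEVICE

    # A 64-bit machine type (e.g. the aarch64 virt build) then does
    # "select VIRTIO_MEM_SUPPORTED"; 32-bit arm configurations never do,
    # so the device simply isn't compiled there.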
> >>
> >> You might be aware of https://virtio-mem.gitlab.io/developer-guide.html 
> >> and the "issue" with 64k base pages - 512MB granularity. Similar to the
> >> question from Auger, have you tried running arm64 with differing page
> >> sizes in host/guest?
> >>  
> > 
> > Hi David,
> >   
> >> With recent kernels, you can use "memhp_default_state=online_movable" on 
> >> the kernel cmdline to make memory unplug more likely to succeed - 
> >> especially with 64k base pages. You just have to be sure to not hotplug 
> >> "too much memory" to a VM.  
> > 
> > Thanks for the pointer - that definitely simplifies testing.  Was getting a
> > bit tedious without that.
> > 
> > As ever, other stuff got in the way, so I only just got back to looking
> > at this.
> > 
> > I've not done a particularly comprehensive set of tests yet, but things seem
> > to 'work' with mixed page sizes.
> > 
> > With 64K pages in general, you run into a problem with the device
> > block_size being smaller than the subblock_size.  I've just added a check
> > for that into the
> 
> "device block size smaller than subblock size" - that's very common,
> e.g.,  on x86-64.
> 
> E.g., device_block_size is 2MiB, subblock size 4MiB - until we improve
> that in the future in Linux guests.

Ah. I'd missed that quirk around MAX_ORDER.  It's also true of ARM64 with
4k pages.  As you can probably guess, I'd forgotten to recompile my 4k test
kernel after adding that particular check. :(
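
(For anyone following along, the sub-block size in the guest driver is
currently derived roughly as

    subblock_size = PAGE_SIZE * max(MAX_ORDER_NR_PAGES, pageblock_nr_pages)

so on x86-64 and arm64/4K it works out to max(1024, 512) * 4 KiB = 4 MiB, which
is why a 2 MiB device block is already below the sub-block size there. Exact
values depend on the kernel configuration.)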

Ah well. Given we are already in a situation where adding 2MiB doesn't actually
do anything useful, I guess it's not really a problem to merrily let the host
add less than the guest can make use of.  So we allow adding any multiple of
2MiB, but in reality the guest isn't going to use anything other than 512MiB
chunks.
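
(The 512MiB figure falls straight out of the 64k page-table geometry: a table
holds 64 KiB / 8 bytes = 8192 entries, so a PMD-level huge page - and with it
the granularity the guest can currently work in - is 8192 * 64 KiB = 512 MiB.)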

> 
> Or did you mean something else?
> 
> > virtio-mem kernel driver and have it fail to probe if that happens.  I don't
> > think such a setup makes any sense anyway, so no loss there.  Should it make
> > sense to drop that restriction in the future, we can deal with that then
> > without breaking backwards compatibility.
> > 
> > So the question is whether it makes sense to bother with virtio-mem support
> > at all on ARM64 with 64k pages, given that the minimum workable block_size
> > is currently 512MiB?  I guess there is an argument for virtio-mem being a
> > possibly more convenient interface than full memory hotplug.  Curious to
> > hear what people think on this.
> 
> IMHO we really want it. For example, RHEL is always 64k. This is a
> current guest limitation, to be improved in the future - either by
> moving away from 512MB huge pages with 64k or by improving
> alloc_contig_range().

Sure.  Would be great if we do one day support 2MiB on 64k.

> 
> > 
> > 4K guest on 64K host seems fine and no such limit is needed - though there
> > may be performance issues doing that.  
> 
> Yeah, if one is lucky to get one of these 512 MiB huge pages at all :)

Not too hard on my 1TB test system that's running nothing much else, but agreed
it won't be trivial more generally.

> 
> > 
> > 64k guest on 4k host with 512MiB block size seems fine.
> > 
> > If there are any places anyone thinks need particular poking, I'd appreciate
> > a hint :)
> 
> If things seem to work for now, that's great :) Thanks!
> 
Cool.  I'll run a few more comprehensive tests, then send out the
trivial patch to enable the kernel option + v2 of the QEMU support.
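
For anyone who wants to poke at this themselves, a minimal invocation looks
something like the sketch below. Sizes, IDs and the kernel/disk bits are
illustrative, and the device properties are the generic virtio-mem-pci ones
rather than anything specific to this patch:

    qemu-system-aarch64 -M virt -enable-kvm -cpu host -smp 4 \
        -m 4G,maxmem=64G \
        -object memory-backend-ram,id=vmem0,size=32G \
        -device virtio-mem-pci,id=vm0,memdev=vmem0,requested-size=1G,block-size=512M \
        -kernel Image \
        -append "root=/dev/vda memhp_default_state=online_movable" \
        ...                       # plus the usual disk/console/net options

    # grow or shrink the device at runtime from the QEMU monitor:
    (qemu) qom-set vm0 requested-size 16G

The block-size=512M property follows the 64k-page discussion above; smaller
values are possible for 4k guests.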

Jonathan



