From: Andrew Jones
Subject: Re: [Qemu-devel] [PATCH] vl.c: make sure maxcpus matches topology to prevent migration failure
Date: Mon, 27 Aug 2018 13:21:52 +0200
User-agent: NeoMutt/20180716

On Thu, Aug 23, 2018 at 03:03:07PM -0300, Eduardo Habkost wrote:
> On Thu, Aug 23, 2018 at 06:32:41PM +0200, Paolo Bonzini wrote:
> > On 23/08/2018 16:51, Igor Mammedov wrote:
> > > Topology (threads*cores*sockets) must match maxcpus to be valid,
> > > otherwise we could start QEMU with an invalid topology that throws
> > > an error on the migration destination side, which should not be reachable:
> > > Source:
> > >   -smp 8,maxcpus=64,cores=1,threads=8,sockets=1
> > > // hotplug CPUs up to maxcpus
> > > Destination:
> > >   -smp 64,maxcpus=64,cores=1,threads=8,sockets=1
> > >   qemu: cpu topology: sockets (1) * cores (1) * threads (8) < smp_cpus (64)
> > 
> > The destination should have sockets=8, shouldn't it?
> > 
> > It seems to me that, at startup, you should have cpus = s*t*c and cpus
> > <= maxcpus.  Currently we check cpus <= s*t*c <= maxcpus, which doesn't
> > make much sense.
> 
> Most of the incompleteness of input validation at smp_parse() can
> be explained by our fear of breaking existing configurations and
> making existing running VMs not runnable.
> 
> But now we have a deprecation policy.  If we're still afraid of
> breaking peoples' existing configurations, we should at least
> deprecate those configurations as soon as possible (and make QEMU
> at least emit a warning).
>
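
To make the checks being debated concrete, here is a minimal, self-contained
sketch (plain C, not QEMU's actual smp_parse(); every identifier below is
hypothetical) of a startup validation that enforces the topology/maxcpus
relationship from Igor's patch while only warning, along the lines Eduardo
suggests, so that previously accepted configs keep running:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for the -smp sanity check discussed above. */
static bool validate_smp(unsigned cpus, unsigned maxcpus,
                         unsigned sockets, unsigned cores, unsigned threads)
{
    unsigned topo = sockets * cores * threads;

    /* Uncontroversial part: boot CPUs can never exceed maxcpus. */
    if (cpus > maxcpus) {
        fprintf(stderr, "smp_cpus (%u) exceeds maxcpus (%u)\n", cpus, maxcpus);
        return false;
    }

    /*
     * Igor's patch: the topology product must match maxcpus, otherwise the
     * same topology with every CPU hotplugged is rejected when it shows up
     * as smp_cpus on the migration destination.  Per the deprecation point,
     * configs that violate this get a warning first rather than a hard error.
     */
    if (topo != maxcpus) {
        fprintf(stderr,
                "warning: sockets (%u) * cores (%u) * threads (%u) != "
                "maxcpus (%u); such configurations are deprecated\n",
                sockets, cores, threads, maxcpus);
    }
    return true;
}

int main(void)
{
    /*
     * Source side from the commit message:
     *   -smp 8,maxcpus=64,cores=1,threads=8,sockets=1
     * It starts today, but the destination's
     *   -smp 64,maxcpus=64,cores=1,threads=8,sockets=1
     * is rejected because 1 * 1 * 8 < 64.  The warning flags that up front.
     */
    validate_smp(8, 64, 1, 1, 8);
    return 0;
}

Note that under Paolo's alternative (require cpus == sockets * cores * threads
and cpus <= maxcpus at startup), the example source command line would still be
accepted (8 == 1 * 1 * 8 and 8 <= 64), so the two suggestions constrain
different things.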

A million years ago (well, > 2 anyway), when I was thinking about doing some
'-smp' improvements, I tried to address this without breaking existing
configs. Here's how I approached it:

 https://lists.gnu.org/archive/html/qemu-ppc/2016-06/msg00317.html

Also, there's another, similar fix needed in smbios generation. See:

 https://lists.gnu.org/archive/html/qemu-ppc/2016-06/msg00322.html

Thanks,
drew 


