From: Peter Xu
Subject: Re: [PATCH v2 0/4] apic: Fix migration breakage of >255 vcpus
Date: Wed, 23 Oct 2019 18:39:17 +0800
User-agent: Mutt/1.11.4 (2019-03-13)

On Sat, Oct 19, 2019 at 11:41:53AM +0800, Peter Xu wrote:
> On Wed, Oct 16, 2019 at 11:40:01AM -0300, Eduardo Habkost wrote:
> > On Wed, Oct 16, 2019 at 10:29:29AM +0800, Peter Xu wrote:
> > > v2:
> > > - use uint32_t rather than int64_t [Juan]
> > > - one more patch (patch 4) to check dup SaveStateEntry [Dave]
> > > - one more patch to define a macro (patch 1) to simplify patch 2
> > > 
> > > Please review, thanks.
> > 
> > I wonder how hard it is to write a simple test case to reproduce
> > the original bug.  We can extend tests/migration-test.c or
> > tests/acceptance/migration.py.  If using -device with explicit
> > apic-id, we probably don't even need to create >255 VCPUs.
> 
> I can give it a shot next week. :)

When I was playing with it, I noticed that it's not that easy, at
least for migration-test.  We would need to do all of these:

- add one specific CPU with apic-id>255; this part is easy by using
  "-device qemu64-x86_64-cpu,..."

- enable x2apic in the guest code and read the apic-id on that special
  vcpu, to make sure it stays correct after migration (see the first
  sketch below) -- but before we can do that...

- I failed to find a way to make a CPU with apic-id>255 the BSP of the
  system; I can only create APs with apic-id>255.  So we would need to
  add initial MP support to the migration guest code (see the second
  sketch below), then run that apic-id check on the new AP

- I also seem to have hit that q35 bug when bootstrapping from the 512B
  disk, so we would probably need to work around that too until it is
  fixed
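
For reference, the guest-side apic-id check would look roughly like
the code below.  This is only a sketch: the MSR numbers are from the
SDM, the rdmsr/wrmsr helpers are written out here just for
illustration, and it assumes the test guest runs at CPL0 the way the
existing migration-test bootblock does.

  #include <stdint.h>

  #define MSR_IA32_APIC_BASE   0x1b
  #define APIC_BASE_ENABLE     (1u << 11)   /* xAPIC global enable */
  #define APIC_BASE_EXTD       (1u << 10)   /* x2APIC mode enable */
  #define MSR_X2APIC_APICID    0x802        /* x2APIC ID register */

  static inline uint64_t rdmsr(uint32_t msr)
  {
      uint32_t lo, hi;
      asm volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
      return ((uint64_t)hi << 32) | lo;
  }

  static inline void wrmsr(uint32_t msr, uint64_t val)
  {
      asm volatile("wrmsr" :: "c"(msr), "a"((uint32_t)val),
                   "d"((uint32_t)(val >> 32)));
  }

  /* Switch the local APIC into x2APIC mode, then return the full
   * 32-bit APIC ID (which can be >255, unlike the xAPIC one). */
  static uint32_t enable_x2apic_and_get_id(void)
  {
      uint64_t base = rdmsr(MSR_IA32_APIC_BASE);

      wrmsr(MSR_IA32_APIC_BASE, base | APIC_BASE_ENABLE | APIC_BASE_EXTD);
      return (uint32_t)rdmsr(MSR_X2APIC_APICID);
  }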
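
Then, for the MP bringup part, the BSP would need to send the
INIT/SIPI sequence through the x2APIC ICR, something like the sketch
below.  Again this is only illustrative: the trampoline vector and the
delays are placeholders (not taken from any existing test), and it
reuses the wrmsr() helper above.

  #define MSR_X2APIC_ICR       0x830

  /* In x2APIC mode the ICR is a single 64-bit MSR write: the full
   * 32-bit destination APIC ID goes in the high half, the command in
   * the low half, so destinations >255 just work. */
  static void x2apic_send_ipi(uint32_t dest_apic_id, uint32_t icr_low)
  {
      wrmsr(MSR_X2APIC_ICR, ((uint64_t)dest_apic_id << 32) | icr_low);
  }

  static void start_ap(uint32_t dest_apic_id, uint32_t trampoline_vector)
  {
      x2apic_send_ipi(dest_apic_id, 0x00004500);                     /* INIT */
      /* ...delay ~10ms... */
      x2apic_send_ipi(dest_apic_id, 0x00004600 | trampoline_vector); /* SIPI */
      /* ...delay, optionally send a second SIPI... */
  }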

Unless someone has a better idea on this, I'll simply stop here,
because I'm afraid it's not worth the effort so far... (or at least
until we have some other requirement to enrich the migration qtest
framework)

-- 
Peter Xu


