
Re: [Qemu-devel] [Qemu-ppc] [PATCH 1/1] spapr: Do not re-read the clock


From: Maxiwell S. Garcia
Subject: Re: [Qemu-devel] [Qemu-ppc] [PATCH 1/1] spapr: Do not re-read the clock on pre_save handler on migration
Date: Wed, 5 Jun 2019 16:39:27 -0300
User-agent: NeoMutt/20180716

On Thu, May 30, 2019 at 11:13:41AM +1000, David Gibson wrote:
> On Thu, May 23, 2019 at 05:18:51PM -0300, Maxiwell S. Garcia wrote:
> > On Thu, May 23, 2019 at 09:29:52AM +1000, David Gibson wrote:
> > > On Mon, May 20, 2019 at 05:43:40PM -0300, Maxiwell S. Garcia wrote:
> > > > This handler was added in the commit:
> > > >   42043e4f1241: spapr: clock should count only if vm is running
> > > > 
> > > > In a scenario without migration, this pre_save handler is not
> > > > triggered, so the 'stop/cont' commands save and restore the clock
> > > > in the function 'cpu_ppc_clock_vm_state_change.' The SW clock
> > > > in the guest doesn't know about this pause.
> > > > 
> > > > If the command 'migrate' is called between 'stop' and 'cont',
> > > > the pre_save handler re-reads the clock, and the SW clock in the
> > > > guest will know about the pause between 'stop' and 'migrate.'
> > > > If the guest is running a workload like HTC, a side-effect of
> > > > this is a lot of process stall messages (with call traces) in
> > > > the kernel guest.
> > > > 
> > > > Signed-off-by: Maxiwell S. Garcia <address@hidden>
> > > 
> > > What affect will this have on the clock for the case of migrations
> > > without a stop/cont around?
> > 
> > The guest timebase is saved when the VM stops running and restored when
> > the VM starts running again (the cpu_ppc_clock_vm_state_change handler).
> > Migrations without stop/cont save the clock when the VM goes to the
> > FINISH_MIGRATE state.
> 
> Right... which means the clock is effectively stopped for the
> migration downtime window while we transfer the final state.  That
> means the guest clock will drift from wall clock by a couple of
> hundred ms across the migration which is not correct.
> 
> > > The complicated thing here is that for
> > > *explicit* stops/continues we want to freeze the clock, however for
> > > the implicit stop/continue during migration downtime, we want to keep
> > > the clock running (logically), so that the guest time of day doesn't
> > > get out of sync on migration.
> > > 
> > 
> > Not sure if the *implicit* word here is about commands from the libvirt
> > or any other orchestrator.
> 
> By implicit I mean the stopping of the VM which qemu does to transfer
> the final part of the state, rather than because of an explicit
> stop/cont command.
> 
> > QEMU itself doesn't know the intent behind the
> > stop/cont command. So, if we are using a guest to process a workload and
> > the manager tool decides to migrate our VM transparently, it's unpleasant
> > to see a lot of process stalls with call traces in the kernel log.
> 
> If you have a lot of process stalls across a migration, that suggests
> your permitted downtime window is *way* too long.
> 

I see a difference between live migration and 'cold' migration. In
a cold migration scenario (where a user 'stops' the machine, waits an
arbitrary time, moves it to another server, and runs it), the behavior
should be the same as executing 'stop/cont' in a guest without migration.

This problem also emerges when the 'timeout' flag is used with the 'virsh'
tool to live-migrate a guest. After the 'timeout', libvirt sends a
'stop' command to QEMU to suspend the guest before migrating. If NFS
is slow, for example, the guest will wait many minutes to run again.

Maybe a solution is to modify the timebase_pre_save handler to check the
current vm_state and only save the timebase again when the VM is not
explicitly stopped. What do you think?


> > The high-level tools could sync the SW clock with the HW clock if this
> > behavior is required, keeping the QEMU stop/cont and stop/migrate/cont
> > consistent.
> > 
> > > > ---
> > > >  hw/ppc/ppc.c | 24 ------------------------
> > > >  1 file changed, 24 deletions(-)
> > > > 
> > > > diff --git a/hw/ppc/ppc.c b/hw/ppc/ppc.c
> > > > index ad20584f26..3fb50cbeee 100644
> > > > --- a/hw/ppc/ppc.c
> > > > +++ b/hw/ppc/ppc.c
> > > > @@ -1056,35 +1056,11 @@ void cpu_ppc_clock_vm_state_change(void 
> > > > *opaque, int running,
> > > >      }
> > > >  }
> > > >  
> > > > -/*
> > > > - * When migrating, read the clock just before migration,
> > > > - * so that the guest clock counts during the events
> > > > - * between:
> > > > - *
> > > > - *  * vm_stop()
> > > > - *  *
> > > > - *  * pre_save()
> > > > - *
> > > > - *  This reduces clock difference on migration from 5s
> > > > - *  to 0.1s (when max_downtime == 5s), because sending the
> > > > - *  final pages of memory (which happens between vm_stop()
> > > > - *  and pre_save()) takes max_downtime.
> > > 
> > > Urgh.. this comment is confusing - 5s would be a ludicrously long
> > > max_downtime by modern standards.
> > > 
> > > > - */
> > > > -static int timebase_pre_save(void *opaque)
> > > > -{
> > > > -    PPCTimebase *tb = opaque;
> > > > -
> > > > -    timebase_save(tb);
> > > > -
> > > > -    return 0;
> > > > -}
> > > > -
> > > >  const VMStateDescription vmstate_ppc_timebase = {
> > > >      .name = "timebase",
> > > >      .version_id = 1,
> > > >      .minimum_version_id = 1,
> > > >      .minimum_version_id_old = 1,
> > > > -    .pre_save = timebase_pre_save,
> > > >      .fields      = (VMStateField []) {
> > > >          VMSTATE_UINT64(guest_timebase, PPCTimebase),
> > > >          VMSTATE_INT64(time_of_the_day_ns, PPCTimebase),
> > > 
> > 
> > 
> 
> -- 
> David Gibson                  | I'll have my music baroque, and my code
> david AT gibson.dropbear.id.au        | minimalist, thank you.  NOT _the_ 
> _other_
>                               | _way_ _around_!
> http://www.ozlabs.org/~dgibson
