

From: Daniel P. Berrangé
Subject: Re: [Qemu-block] [Qemu-devel] [PATCH 1/2] Add save-snapshot, load-snapshot and delete-snapshot to QAPI
Date: Tue, 13 Feb 2018 14:36:15 +0000
User-agent: Mutt/1.9.1 (2017-09-22)

On Tue, Feb 13, 2018 at 05:30:02PM +0300, Roman Kagan wrote:
> On Tue, Feb 13, 2018 at 11:50:24AM +0100, Kevin Wolf wrote:
> > Am 11.01.2018 um 14:04 hat Daniel P. Berrange geschrieben:
> > > Then you could just use the regular migrate QMP commands for loading
> > > and saving snapshots.
> > 
> > Yes, you could. I think for a proper implementation you would want to do
> > better, though. Live migration provides just a stream, but that's not
> > really well suited for snapshots. When a RAM page is dirtied, you just
> > want to overwrite the old version of it in a snapshot [...]
> This means the point in time where the guest state is snapshotted is not
> when the command is issued, but any unpredictable amount of time later.
> I'm not sure this is what a user expects.
> A better approach for the save part appears to be to stop the vcpus,
> dump the device state, resume the vcpus, and save the memory contents in
> the background, prioritizing the old copies of the pages that change.
> That way, no multiple copies of the same page would have to be saved,
> so the stream format would be fine.  For the load part the usual
> inmigrate path should work.

No, that's a policy decision that doesn't matter from the QMP PoV. If the
mgmt app wants the snapshot to be taken with respect to the initial point
in time, it can simply invoke the "stop" QMP command before starting the
live migration and "cont" afterwards.
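As a minimal sketch of what the mgmt-app side of that sequence looks like: the commands below are real QMP commands ("qmp_capabilities", "stop", "migrate", "cont"), but the exec: snapshot URI is just an illustrative example, and the socket transport is omitted.

```python
import json

def snapshot_commands(uri="exec:cat > /tmp/vm-snapshot.bin"):
    """Return, in order, the QMP commands implementing
    'stop, migrate to a file, cont'. The uri value is an
    example; any migration URI the mgmt app prefers works."""
    return [
        {"execute": "qmp_capabilities"},   # leave QMP greeting mode
        {"execute": "stop"},               # pause vcpus: fixes the point in time
        {"execute": "migrate",
         "arguments": {"uri": uri}},       # stream guest state out to the file
        {"execute": "cont"},               # resume vcpus once migration completes
    ]

if __name__ == "__main__":
    for cmd in snapshot_commands():
        print(json.dumps(cmd))
```

In a real client, the mgmt app would wait for the MIGRATION completed event (or poll "query-migrate") before issuing "cont".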

|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
