From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [Qemu-block] Some question about savem/qcow2 incremental snapshot
Date: Thu, 31 May 2018 11:48:38 +0100
User-agent: Mutt/1.9.5 (2018-04-13)

On Wed, May 30, 2018 at 06:07:19PM +0200, Kevin Wolf wrote:
> Am 30.05.2018 um 16:44 hat Stefan Hajnoczi geschrieben:
> > On Mon, May 14, 2018 at 02:48:47PM +0100, Stefan Hajnoczi wrote:
> > > On Fri, May 11, 2018 at 07:25:31PM +0200, Kevin Wolf wrote:
> > > > Am 10.05.2018 um 10:26 hat Stefan Hajnoczi geschrieben:
> > > > > On Wed, May 09, 2018 at 07:54:31PM +0200, Max Reitz wrote:
> > > > > > On 2018-05-09 12:16, Stefan Hajnoczi wrote:
> > > > > > > On Tue, May 08, 2018 at 05:03:09PM +0200, Kevin Wolf wrote:
> > > > > > >> Am 08.05.2018 um 16:41 hat Eric Blake geschrieben:
> > > > > > >>> On 12/25/2017 01:33 AM, He Junyan wrote:
> > > > > > >> I think it makes sense to invest some effort into such
> > > > > > >> interfaces, but be prepared for a long journey.
> > > > > > > 
> > > > > > > I like the suggestion but it needs to be followed up with a
> > > > > > > concrete design that is feasible and fair for Junyan and
> > > > > > > others to implement.  Otherwise the "long journey" is really
> > > > > > > just a way of rejecting this feature.
> > 
> > The discussion on NVDIMM via the block layer has run its course.  It
> > would be a big project and I don't think it's fair to ask Junyan to
> > implement it.
> > 
> > My understanding is this patch series doesn't modify the qcow2 on-disk
> > file format.  Rather, it just uses existing qcow2 mechanisms and extends
> > live migration to identify the NVDIMM state region so that its
> > clusters can be shared.
> > 
> > Since this feature does not involve qcow2 format changes and is just an
> > optimization (dirty blocks still need to be allocated), it can be
> > removed from QEMU in the future if a better alternative becomes
> > available.
> > 
> > Junyan: Can you rebase the series and send a new revision?
> > 
> > Kevin and Max: Does this sound alright?
> 
> Do patches exist? I've never seen any, so I thought this was just the
> early design stage.

Sorry for the confusion, the earlier patch series was here:

  https://lists.nongnu.org/archive/html/qemu-devel/2018-03/msg04530.html

> I suspect that while it wouldn't change the qcow2 on-disk format in a
> way that the qcow2 spec would have to be changed, it does need to change
> the VMState format that is stored as a blob within the qcow2 file.
> At least, you need to store which other snapshot it is based upon so
> that you can actually resume a VM from the incremental state.
> 
> Once you modify the VMState format/the migration stream, removing it
> from QEMU again later means that you can't load your old snapshots any
> more. Doing that, even with the two-release deprecation period, would be
> quite nasty.
> 
> But you're right, depending on how the feature is implemented, it might
> not be a thing that affects qcow2 much, but one that the migration
> maintainers need to have a look at. I kind of suspect that it would
> actually touch both parts to a degree that it would need approval from
> both sides.

VMState wire format changes are minimal.  The only issue is that the
previous snapshot's nvdimm vmstate can start at an arbitrary offset
within a qcow2 cluster.  We can solve the misalignment problem
(I think Junyan's patch series adds padding).
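
For concreteness, the padding could look roughly like this (a sketch
only; the helper names and the snapshot-id field are my assumptions,
not taken from the series):

  pos = vmstate_offset(bs);                /* assumed helper */
  pad = ROUND_UP(pos, cluster_size) - pos;
  write(bs, zeroes, pad);                  /* align the NVDIMM region
                                              to a cluster boundary */
  write(bs, previous_snapshot_id);         /* base snapshot reference,
                                              needed to resume from the
                                              incremental state */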

The approach references existing clusters in the previous snapshot's
vmstate area and only allocates new clusters for dirty NVDIMM regions.
In the non-qcow2 case we fall back to writing the entire NVDIMM
contents.

So instead of:

  write(qcow2_bs, all_vmstate_data); /* duplicates nvdimm contents :( */

do:

  write(bs, vmstate_data_upto_nvdimm);
  if (is_qcow2(bs)) {
      /* reference the previous snapshot's clusters instead of
       * duplicating the unchanged NVDIMM contents */
      snapshot_clone_vmstate_range(bs, previous_snapshot,
                                   offset_to_nvdimm_vmstate);
      /* only dirty blocks get newly allocated clusters */
      overwrite_nvdimm_dirty_blocks(bs, nvdimm);
  } else {
      /* non-qcow2: fall back to writing the full NVDIMM contents */
      write(bs, nvdimm_vmstate_data);
  }
  write(bs, vmstate_data_after_nvdimm);
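
where overwrite_nvdimm_dirty_blocks() might be little more than a
dirty-bitmap walk (again only a sketch; the bitmap and helper names
are assumptions, not from the series):

  for (off = 0; off < nvdimm_size; off += cluster_size) {
      if (test_bit(off / cluster_size, dirty_bitmap)) {
          /* a new cluster is allocated only for this dirty block */
          write_at(bs, nvdimm_vmstate_start + off,
                   nvdimm_data + off, cluster_size);
      }
  }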

Stefan
