Re: [Qemu-devel] Loading snapshot with readonly qcow2 image

From: Michael Spradling
Subject: Re: [Qemu-devel] Loading snapshot with readonly qcow2 image
Date: Fri, 14 Dec 2018 16:03:34 +0000

On Dec 13 15:43, Eric Blake wrote:
> On 12/13/18 12:33 PM, Michael Spradling wrote:
> > > > My question is has anyone looked into loading snapshots from a backing
> > > > file?  I have attempted to look through the code and this looks to be
> > > > difficult.  If I attempt to add support for this is there any general
> > > > advice to follow?  Any other ideas?
> > > 
> > > 'qemu-nbd -l' can serve snapshots from a qcow2 file; perhaps that can be
> > > used to cobble together something that works for your needs?
> > > 
> > 
> > I looked at "qemu-nbd -l" and this seems to only export a readonly
> > interface.  Really, what I need is a writable temp file that can load a
> > snapshot.
> Can you combine -s (create a writable temp file) with -l to get what you
> want?
> /me tries:
> $ qemu-img create -f qcow2 a 1M
> Formatting 'a', fmt=qcow2 size=1048576 cluster_size=65536 lazy_refcounts=off
> refcount_bits=16
> $ qemu-io -c 'w -P 1 0 512' a
> wrote 512/512 bytes at offset 0
> 512 bytes, 1 ops; 0.0487 sec (10.257 KiB/sec and 20.5137 ops/sec)
> $ qemu-img snapshot -c snap a
> $ qemu-io -c 'w -P 2 0 512' a
> wrote 512/512 bytes at offset 0
> 512 bytes, 1 ops; 0.0752 sec (6.645 KiB/sec and 13.2903 ops/sec)
> $ qemu-nbd -l snap -s a
> Failed to load snapshot: Can't find snapshot
> I can confirm that 'qemu-nbd -s a' lets me write data that is discarded on
> disconnect (lsof says a temp file in /var/tmp/vl.XXXXXX was created); and
> that 'qemu-nbd -l snap a' lets me read the snapshot data. But mixing the two
> fails, and it would be a nice bug to fix.

I briefly looked at the code and it seems to be using the same base
functions as qemu does.  So, if I get this working for qemu it
might also start working for qemu-nbd.

> > 
> > Please excuse and correct me if I get some of the terminology of the
> > sections below wrong.
> > 
> > I went down the path of hacking up some of the qemu qcow2 file system
> > code to see if I can achieve the ability to restore a snapshot from a
> > backing file to the temporarily created "-snapshot" qcow2 image.  The
> > backing file has been marked readonly by the filesystem and the active
> > image file was created with the "-snapshot" option.  I spent some time
> > reading the qcow2 documentation and it seems I have to copy the L1 and
> > L2 table values (are these actual host clusters?) from the backing file
> > snapshot to the active image's L1 and L2 tables.  Is there anything else
> > that may need to be updated that I have not yet stumbled upon?
> Mucking with the l1 and l2 tables implies that you are directly manipulating
> qcow2 contents.  It's much nicer if you can come up with a solution where
> qemu-img does all the qcow2 work for you, and you just worry about
> guest-visible data.  Or are you actually patching the code compiled into
> qemu-img?
Ideally, I want to avoid modifying old images or creating new images with
qemu-img, so I have been modifying not qemu-img but qemu directly.
My use case will have several snapshots in an image (say
100).  I will then later resume each of these snapshots in parallel
qemu sessions.  This is why I have gone down the route of modifying
the L1 and L2 tables of the temp snapshot file /var/tmp/vl.XXXXX.  My
understanding is that if these are updated and a cluster doesn't exist in
the temp file, the code will then look for it in the backing file.  I am
still researching this area.
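To check my understanding of that fall-through behaviour, here is a toy
sketch of the copy-on-write lookup (a conceptual model only; the class and
names are mine and do not reflect the real L1/L2 on-disk layout):

```python
# Toy model of qcow2 copy-on-write lookup: a guest cluster index maps to
# data in the active (temp) image if allocated there; otherwise the read
# falls through to the backing file.  Conceptual sketch only, not the
# actual qcow2 code.

class ToyImage:
    def __init__(self, backing=None):
        self.clusters = {}    # guest cluster index -> data
        self.backing = backing

    def read(self, idx):
        if idx in self.clusters:          # allocated in this image
            return self.clusters[idx]
        if self.backing is not None:      # fall through to backing file
            return self.backing.read(idx)
        return b"\0"                      # unallocated reads as zeroes

    def write(self, idx, data):
        # Writes always allocate in the active image (copy-on-write);
        # the backing file is never modified.
        self.clusters[idx] = data


base = ToyImage()
base.write(0, b"snapshot data")

overlay = ToyImage(backing=base)
overlay.read(0)               # served from the backing file
overlay.write(0, b"new data")
overlay.read(0)               # now served from the overlay itself
```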

> > 
> > I still don't have this working yet and I believe my area of problems is
> > qcow2_update_snapshot_refcount.  Can anyone explain what this does
> > exactly?  It seems the function does three different things based on the
> > value of addend, either -1, 0, or 1, but it's somewhat unclear.
> Every cluster of qcow2 is reference-counted, to track which portions of the
> file are (supposed to be) in use according to the metadata trails.
> When internal snapshots are used, this is implemented by incrementing the
> refcount for each cluster that is reachable both from the snapshot and from
> the current L1 table (update_snapshot_refcount +1), then when writing to the
> cluster we break the reference count by writing the new data to a new
> allocation and decrementing the reference count of the old cluster. When
> trimming clusters, we decrement the refcount, and if it goes to 0 the
> cluster can be reused for something else.

I think I understand this.  That would explain addend being -1 or 1.
I am still unclear why you would call the function with addend being 0.
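As a sanity check of my reading so far, here is a toy sketch of that walk
(the function name and shapes are mine, and the addend 0 case is only my
guess from reading the code: it seems to leave refcounts alone and merely
refresh derived state such as the COPIED flag, which marks clusters that
may be written in place because exactly one reference points at them):

```python
# Toy sketch of what qcow2_update_snapshot_refcount conceptually does
# (an unverified reading, not authoritative): walk every cluster
# reachable from an L1 table, apply `addend` to its refcount, and
# recompute the COPIED flag from the resulting refcount.

def update_snapshot_refcount(refcounts, copied, reachable, addend):
    """refcounts: dict cluster -> count; copied: set of clusters whose
    COPIED flag is set; reachable: clusters reachable from the L1 table."""
    assert addend in (-1, 0, 1)
    for cluster in reachable:
        refcounts[cluster] = refcounts.get(cluster, 0) + addend
        # With addend == 0 the line above changes nothing; the walk
        # still recomputes the COPIED flag for every visited cluster.
        if refcounts[cluster] == 1:
            copied.add(cluster)       # sole reference: in-place writes OK
        else:
            copied.discard(cluster)   # shared: writes must reallocate


refcounts = {0: 1, 1: 1}
copied = {0, 1}

# Taking an internal snapshot: each cluster is now referenced twice, so
# the COPIED flag is cleared and future writes go to a new allocation.
update_snapshot_refcount(refcounts, copied, [0, 1], +1)
```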
> -- 
> Eric Blake, Principal Software Engineer
> Red Hat, Inc.           +1-919-301-3266
> Virtualization:  qemu.org | libvirt.org

Thanks for your help so far.
