
Re: [Qemu-devel] Qemu and Changed Block Tracking

From: John Snow
Subject: Re: [Qemu-devel] Qemu and Changed Block Tracking
Date: Fri, 24 Feb 2017 16:31:42 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.7.0

On 02/23/2017 09:27 AM, Peter Lieven wrote:
> Am 22.02.2017 um 13:32 schrieb Eric Blake:
>> On 02/22/2017 02:45 AM, Peter Lieven wrote:
>>>> A bit outdated now, but:
>>>> http://wiki.qemu-project.org/Features/IncrementalBackup
>>>> and also a summary I wrote not too far back (PDF):
>>>> https://drive.google.com/file/d/0B3CFr1TuHydWalVJaEdPaE5PbFE
>>>> and I'm sure the Virtuozzo developers could chime in on this subject,
>>>> but basically we do have something similar in the works, as eblake
>>>> says.
>>> Hi John, Hi Erik,
>> It's Eric, but you're not the first to make that typo :)
>>> thanks for your feedback. Are you both the ones working primarily on
>>> this topic?
>>> If there is anything to review or help needed, please let me know.
>>> My 2 cents:
>>> One thing I had in mind, for the case where no image fleecing is
>>> available but the dirty bitmap can be fetched externally, would be a
>>> feature to put a write lock on a block device.
>> The whole idea is to use a dirty bitmap coupled with image fleecing,
>> where the point-in-time of the image fleecing is done at a window where
>> the guest I/O is quiescent in order to get a stable fleecing point.  We
>> already support write locks (guest quiesence) using qga to do fsfreeze.
>> You want the time that guest I/O is frozen to be as small as possible
>> (in particular, the Windows implementation of quiescence will fail if
>> you hold things frozen for more than a couple of seconds).
>> Right now, the qcow2 image format does not track write generations, and
>> I don't think we plan on adding that directly into qcow2.  However, you
>> can externally simulate write generations by keeping track of how many
>> image fleecing points you have created (each fleecing point is another
>> write generation).
>>> In this case something like this via QMP (and external software)
>>> should work:
>>> ---8<---
>>>   gen = write generation of last backup (or 0 for full backup)
>>>   do {
>>>       nextgen = fetch current write generation (via QMP)
>>>       dirtymap = all blocks whose write generation is greater
>>> than 'gen' (via QMP)
>> No, we are NOT going to send dirty information via QMP.  Rather, we are
>> going to send it via NBD's extension NBD_CMD_BLOCK_STATUS.  The idea is
>> that a client connects and asks which qemu blocks are dirty, then uses
>> that information to read only the dirty blocks.
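To make that concrete: the flow Eric describes maps onto existing QMP
commands. A rough sketch (the node name "drive0", bitmap name "bitmap0"
and target filename are placeholders, not anything from this thread):

```json
{ "execute": "block-dirty-bitmap-add",
  "arguments": { "node": "drive0", "name": "bitmap0" } }

{ "execute": "drive-backup",
  "arguments": { "device": "drive0",
                 "bitmap": "bitmap0",
                 "sync": "incremental",
                 "target": "inc.0.qcow2",
                 "format": "qcow2" } }
```

The first call starts tracking writes; each later "drive-backup" with
sync=incremental copies only the blocks the bitmap marked dirty since
the previous backup, and resets the bitmap on success. That reset is
effectively the "write generation" bump described above.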
> I understand that, for the case of local storage, connecting via NBD to
> Qemu to grab a snapshot
> might be a good idea, but consider that you have a NAS for your vServer
> images. Be it NFS,
> iSCSI, CEPH or whatever. In an enterprise scenario I would generally
> expect a NAS rather
> than local storage.
> When you are going to back up your vServer (full or incremental) you
> shuffle all the traffic through
> Qemu and the node running the vServer. In this case you run all the
> traffic over the wire twice:
> NAS -> Node -> Qemu -> Backup Server
> But the Backup Server could instead connect to the NAS directly, avoiding
> load on the frontend LAN
> and the Qemu node.

In a live backup I don't see how you will be removing QEMU from the data
transfer loop. QEMU is the only process that knows what the correct view
of the image is, and it needs to facilitate the transfer.

It's not safe to copy the blocks directly without QEMU's mediation.
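What you can do is have the backup server pull the data through QEMU's
NBD server, so the guest node is in the loop but no extra staging copy is
needed. As a sketch (addresses and the "drive0" device name are
placeholders):

```json
{ "execute": "nbd-server-start",
  "arguments": { "addr": { "type": "inet",
                           "data": { "host": "0.0.0.0",
                                     "port": "10809" } } } }

{ "execute": "nbd-server-add",
  "arguments": { "device": "drive0" } }
```

The backup server then connects as an ordinary NBD client and reads the
exported (consistent) view directly, rather than having an intermediate
process copy the image off the NAS behind QEMU's back.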


> I would like to find a nice solution for this scenario. Even if not in
> the first step, it would be good to
> keep this in mind when implementing dirty block tracking.
> Peter
