Re: [Qemu-devel] [PATCH v3 0/8] block: drive-backup live backup command

From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH v3 0/8] block: drive-backup live backup command
Date: Thu, 16 May 2013 09:47:46 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On Thu, May 16, 2013 at 02:16:20PM +0800, Wenchao Xia wrote:
>   After checking the code, I found it is possible to also add delta
> (incremental) backup support, if an additional dirty bitmap is added.

I've been thinking about this.  Incremental backups need to know which
blocks have changed, but keeping a persistent dirty bitmap is expensive
and unnecessary.

Backup applications need to support the full backup case anyway for
their first run.  Therefore we can keep a best-effort dirty bitmap which
is persisted only when the guest is terminated cleanly.  If the QEMU
process crashes then the on-disk dirty bitmap will be invalid and the
backup application needs to do a full backup next time.

The advantage of this approach is that we don't need to fdatasync(2)
before every guest write operation.
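A minimal sketch of that idea, in C: keep the dirty bitmap purely in memory on the guest write path, and only mark the on-disk copy valid during a clean shutdown.  All names (`bb_*`, `BackupBitmap`) are hypothetical illustrations, not QEMU APIs:

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>
#include <string.h>

#define BITMAP_WORDS 8   /* tracks 8 * sizeof(unsigned long) * 8 clusters */
#define BITS_PER_WORD (CHAR_BIT * sizeof(unsigned long))

typedef struct {
    unsigned long bits[BITMAP_WORDS];
    bool on_disk_valid;   /* stands in for a validity flag in the persisted file */
} BackupBitmap;

/* Guest write path: just flip the in-memory bit -- no fdatasync(2). */
static void bb_mark_dirty(BackupBitmap *bm, unsigned cluster)
{
    bm->bits[cluster / BITS_PER_WORD] |= 1UL << (cluster % BITS_PER_WORD);
}

/* Clean shutdown: persist the bitmap and mark it valid.  Real code would
 * write the bitmap out and fsync before setting the flag. */
static void bb_clean_shutdown(BackupBitmap *bm)
{
    bm->on_disk_valid = true;
}

/* Next startup: a valid on-disk bitmap permits an incremental backup;
 * after a crash the flag is unset and a full backup is required. */
static bool bb_can_do_incremental(const BackupBitmap *bm)
{
    return bm->on_disk_valid;
}
```

The point is that the fast path (`bb_mark_dirty`) touches only memory; durability is traded for the occasional full backup after a crash.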

> Compared with the current solution, I think it is doing COW at the
> qemu device level:
>     qemu device
>         |
> general block layer
>         |
> virtual format layer
>         |
> -----------------------
> |                     |
> qcow2             vmdk....
>   This will make things complicated when more work comes; a better
> place for block COW is under the general block layer. Maybe later we
> can adjust the block layer for it.

I don't consider block jobs to be part of the "qemu device" layer.  It
sounds like you think the code should be in bdrv_co_do_writev()?

The drive-backup operation doesn't really affect the source
BlockDriverState, it just needs to intercept writes.  Therefore it seems
cleaner for the code to live separately (plus we reuse the code for the
block job loop which copies out data while the guest is running).
Otherwise we would squash all of the blockjob code into block.c and it
would be an even bigger mess than it is today :-).
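To make the interception idea concrete, here is a toy sketch of the copy-on-write scheme: before a guest write reaches a cluster that has not yet been backed up, the old contents are copied to the backup target; a background loop copies out the remaining clusters while the guest runs.  All names (`BackupJob`, `backup_before_write`, etc.) are illustrative, not the actual blockjob code:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define CLUSTERS 4
#define CLUSTER_SIZE 16

typedef struct {
    char source[CLUSTERS][CLUSTER_SIZE];   /* live disk image */
    char target[CLUSTERS][CLUSTER_SIZE];   /* backup destination */
    bool copied[CLUSTERS];                 /* cluster already backed up? */
} BackupJob;

/* Hooked into the guest write path, before the write hits the disk. */
static void backup_before_write(BackupJob *job, unsigned cluster)
{
    if (!job->copied[cluster]) {
        memcpy(job->target[cluster], job->source[cluster], CLUSTER_SIZE);
        job->copied[cluster] = true;
    }
}

static void guest_write(BackupJob *job, unsigned cluster, const char *data)
{
    backup_before_write(job, cluster);     /* copy-on-write interception */
    strncpy(job->source[cluster], data, CLUSTER_SIZE);
}

/* Background job loop: copy out untouched clusters while the guest runs;
 * already-copied clusters are skipped. */
static void backup_run(BackupJob *job)
{
    for (unsigned c = 0; c < CLUSTERS; c++) {
        backup_before_write(job, c);
    }
}
```

Because the interception only reads the source and writes to the target, the source BlockDriverState itself is untouched, which is why the code can live outside block.c.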
