qemu-discuss

From: John Maline
Subject: qmp blockdev-backup online - do I have this right?
Date: Sun, 15 Jan 2023 18:01:11 -0600

For my hobby VM I'm looking to switch to QEMU from a commercial hypervisor, and I wanted to check my understanding of how to do backups. I expect a storage-level point-in-time "snapshot" like the one produced by blockdev-backup to meet my needs: if I ever have to restore from a backup, the guest's journaling filesystem should come up in a consistent enough state.

Environment: macOS 12.6 on an Arm processor; the guest is aarch64 CentOS Linux using the HVF accelerator.

I've read a bit and experimented with blockdev-backup via QMP. The examples I've seen in the QEMU docs confuse me: they emphasize the importance of shutting down the VM once the backup completes, which seems counter to the whole point (as I understand it) of online capabilities like blockdev-backup. https://qemu.readthedocs.io/en/latest/interop/live-block-operations.html#live-disk-backup-blockdev-backup-and-the-deprecated-drive-backup

I adjusted the example to finish with a blockdev-del (after watching the job lifecycle complete) instead of a VM shutdown. It produced a copy of the storage good enough to boot the VM from, so that looks like success. Is blockdev-del a reliable way to flush, close, and clean up the backup file? And if I do this repeatedly on a running VM, should it all keep working as expected (e.g., a daily backup of a VM that might only restart weekly to take patches)?

The sequence of QMP commands I've used is as follows (the qcow2 target file already exists; blockdev-add doesn't create it).

{"execute": "qmp_capabilities"}

{"execute":"blockdev-add",
 "arguments":{"node-name":"backup-node", "driver":"qcow2", "file":{"driver":"file", "filename":"backups/backup1.img"}}
}

{"execute":"blockdev-backup",
 "arguments":{"device":"drive0", "job-id":"job0", "target":"backup-node", "sync":"full"}
}

... watch many job state change events ...
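For reference, the lifecycle events for a successful run look roughly like the following (timestamps omitted, sizes illustrative, and the exact ordering may vary a little by QEMU version):

{"event":"JOB_STATUS_CHANGE", "data":{"id":"job0", "status":"created"}}
{"event":"JOB_STATUS_CHANGE", "data":{"id":"job0", "status":"running"}}
{"event":"JOB_STATUS_CHANGE", "data":{"id":"job0", "status":"waiting"}}
{"event":"JOB_STATUS_CHANGE", "data":{"id":"job0", "status":"pending"}}
{"event":"BLOCK_JOB_COMPLETED", "data":{"device":"job0", "type":"backup", "len":21474836480, "offset":21474836480, "speed":0}}
{"event":"JOB_STATUS_CHANGE", "data":{"id":"job0", "status":"concluded"}}
{"event":"JOB_STATUS_CHANGE", "data":{"id":"job0", "status":"null"}}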

{"execute":"blockdev-del",
 "arguments": {"node-name":"backup-node"}
}
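As an extra sanity check before removing the node, query-jobs (or query-block-jobs) can confirm that job0 has finished and been auto-dismissed; once it has, the job list should come back empty:

{"execute": "query-jobs"}
{"return": []}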

Thanks for any guidance or confirmation.

John

