
Re: [PATCH v0 0/4] background snapshot


From: Denis Plotnikov
Subject: Re: [PATCH v0 0/4] background snapshot
Date: Thu, 23 Jul 2020 11:03:55 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.10.0



On 22.07.2020 19:30, Peter Xu wrote:
On Wed, Jul 22, 2020 at 06:47:44PM +0300, Denis Plotnikov wrote:

On 22.07.2020 18:42, Denis Plotnikov wrote:

On 22.07.2020 17:50, Peter Xu wrote:
Hi, Denis,
Hi, Peter
...
How to use:
1. Enable the background snapshot capability
     virsh qemu-monitor-command vm --hmp migrate_set_capability background-snapshot on

2. Stop the VM
     virsh qemu-monitor-command vm --hmp stop

3. Start the external migration to a file
     virsh qemu-monitor-command cent78-bs --hmp migrate exec:'cat > ./vm_state'

4. Wait for the migration to finish and check that it has reached the "completed" state.
Thanks for continuing to work on this project! I have two high-level questions before digging into the patches.

Firstly, is step 2 required?  Can we use a single QMP command to take snapshots (which can still be a "migrate" command)?
With this series it is required, but steps 2 and 3 should be merged into
a single one.
I'm not sure whether you're talking about the disk snapshot operations; anyway, yeah, it'll definitely be good if we merge them into one in the next version.

After thinking for a while, I remembered why I split these two steps.
The VM snapshot consists of two parts: the disk snapshot(s) and the vmstate.
The migrate command saves only the vmstate, so the steps to save the whole VM snapshot are the following:

2. Stop the VM
    virsh qemu-monitor-command vm --hmp stop

2.1. Make a disk snapshot, something like
    virsh qemu-monitor-command vm --hmp snapshot_blkdev drive-scsi0-0-0-0 ./new_data
3. Start the external migration to a file
    virsh qemu-monitor-command vm --hmp migrate exec:'cat > ./vm_state'

In this example, the VM snapshot consists of two files: vm_state and the disk file.
new_data will contain all the new disk data written since step 2.1 was executed.


Meanwhile, we might also want to check the type of the backend RAM.  E.g., shmem and hugetlbfs are still not supported for uffd-wp (which I'm still working on).  I didn't check explicitly whether we'll simply fail the migration in those cases, since the uffd ioctls will fail for those kinds of RAM.  It would be okay if we handle all the ioctl failures gracefully,
The ioctl's result is processed, but the patch has a flaw: it ignores the result of the ioctl check. I need to fix it.
It happens here:

+int ram_write_tracking_start(void)
+{
+    if (page_fault_thread_start()) {
+        return -1;
+    }
+
+    ram_block_list_create();
+    ram_block_list_set_readonly(); << this returns -1 in case of failure, but I just ignore it here
+
+    return 0;
+}
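
Something like the following minimal sketch, reusing the helpers from the snippet above, would propagate the failure; stopping the page fault thread on the error path is left out, since that code isn't shown here:

int ram_write_tracking_start(void)
{
    if (page_fault_thread_start()) {
        return -1;
    }

    ram_block_list_create();

    /* propagate the ioctl failure instead of silently ignoring it */
    if (ram_block_list_set_readonly()) {
        return -1;
    }

    return 0;
}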

or it would be even better if we directly fail when we want to enable the live snapshot capability for a guest that contains other types of RAM besides private anonymous memory.
I agree, but to know whether shmem or hugetlbfs is supported by the current kernel, we should execute the ioctl for all memory regions when the capability is enabled.
Yes, that seems to be a better solution, so we don't care about the type of the RAM backend anymore but check directly with the uffd ioctls.  With these checks, it'll even be fine to ignore the above retcode, or just assert, because we've already checked that before that point.
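
As a rough illustration of such a capability-time probe (the helper below is hypothetical, and the iteration over QEMU's RAM blocks with page-aligned ranges is left out), the check could look something like this using the plain userfaultfd uAPI:

#include <fcntl.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/userfaultfd.h>   /* needs the 5.7+ uapi header for _UFFDIO_WRITEPROTECT */

/*
 * Hypothetical helper: returns true only if the kernel accepts
 * UFFDIO_REGISTER in write-protect mode for the given page-aligned
 * range and reports the _UFFDIO_WRITEPROTECT ioctl for it.
 */
static bool uffd_wp_supported(void *start, size_t len)
{
    struct uffdio_api api = { .api = UFFD_API, .features = 0 };
    struct uffdio_register reg = {
        .range = { .start = (uintptr_t)start, .len = len },
        .mode = UFFDIO_REGISTER_MODE_WP,
    };
    bool ok = false;
    int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);

    if (uffd < 0) {
        return false;
    }

    if (ioctl(uffd, UFFDIO_API, &api) == 0 &&
        ioctl(uffd, UFFDIO_REGISTER, &reg) == 0 &&
        (reg.ioctls & (1ULL << _UFFDIO_WRITEPROTECT))) {
        ok = true;
    }

    close(uffd);
    return ok;
}

Running such a probe once per RAM block when the background-snapshot capability is set would make shmem/hugetlbfs backends (or an old kernel) fail the capability check early, instead of hitting the ioctl failure in the middle of taking the snapshot.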

Thanks,




