From: Vladimir Sementsov-Ogievskiy
Subject: Re: [PATCH 2/2] qcow2: improve savevm performance
Date: Thu, 11 Jun 2020 11:44:27 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.8.1
11.06.2020 11:25, Denis V. Lunev wrote:
> On 6/11/20 11:04 AM, Vladimir Sementsov-Ogievskiy wrote:
>> 10.06.2020 22:00, Denis V. Lunev wrote:
>>> This patch does 2 standard basic things:
>>> - it creates an intermediate buffer for all writes from the QEMU
>>>   migration code to the QCOW2 image,
>>> - this buffer is sent to disk asynchronously, allowing several
>>>   writes to run in parallel.
>>>
>>> In general, migration code is fantastically inefficient (by
>>> observation): buffers are not aligned and are sent in arbitrary
>>> pieces, much of the time less than 100 bytes at a chunk, which
>>> results in read-modify-write operations when caching is disabled.
>>> It should also be noted that all operations are performed into
>>> unallocated image blocks, which also suffer due to partial writes
>>> to such new clusters.
>>>
>>> Snapshot creation time (2 GB Fedora-31 VM running over NVMe storage):
>>>
>>>                original   fixed
>>> cached:        1.79s      1.27s
>>> non-cached:    3.29s      0.81s
>>>
>>> The difference over HDD would be more significant :)
>>>
>>> Signed-off-by: Denis V. Lunev <den@openvz.org>
>>
>> If I follow correctly, you make qcow2_save_vmstate implicitly
>> asynchronous: it may return immediately after creating a task, and
>> the task executes in parallel. I think the block layer is unprepared
>> for such behavior; it relies on the fact that .bdrv_save_vmstate is
>> synchronous. For example, look at bdrv_co_rw_vmstate(). It calls
>> drv->bdrv_save_vmstate inside a bdrv_inc_in_flight()/
>> bdrv_dec_in_flight() pair. It means that with this patch we may
>> break a drained section.
>
> Drained sections are not involved in the equation - the guest is
> stopped at that moment. Even if we are talking about the in_flight
> count, it should not be a problem even with a running guest. We could
> just put the inc/dec into qcow2_co_vmstate_task_entry().
No, we should at least do the "inc" in qcow2_save_vmstate, while we are still inside the original generic inc/dec pair. Not a big deal, still.
>> Next, it's a kind of cache for the vmstate-write operation. It seems
>> to me that it's not directly related to qcow2, so we could implement
>> it in the generic block layer, where we can handle in_flight
>> requests. Can we keep the .bdrv_save_vmstate handlers of format
>> drivers as is, keep them synchronous, but instead change the generic
>> interface to be (optionally?) cached?
>
> This has been discussed already in the previous thread.
> .bdrv_save_vmstate is implemented in QCOW2 and sheepdog only. Thus
> other formats are mostly not interested.
Hmm, yes, sorry, I forgot. It seems that the sheepdog project is dead. Who knows whether that is correct? OK, it's probably fine to make the interface optionally asynchronous, if we add the in-flight request accounting correctly, but at least we should document it around the .bdrv_save_vmstate handler declaration.

-- 
Best regards,
Vladimir