From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [Qemu-block] Some question about savevm/qcow2 incremental snapshot
Date: Fri, 8 Jun 2018 14:29:27 +0100
User-agent: Mutt/1.9.5 (2018-04-13)
On Fri, Jun 08, 2018 at 05:02:58AM +0000, He, Junyan wrote:
> I use a simple way to handle this:
> 1. Separate the NVDIMM region from RAM when taking a snapshot.
> 2. On the first snapshot, dump all the NVDIMM data the same way as RAM,
>    and enable dirty log tracing for NVDIMM-type regions.
> 3. On later snapshots, find the previous snapshot point and add references
>    to the clusters it used to store the NVDIMM data; this time we save only
>    the dirty page bitmap and the dirty pages themselves. Because the
>    previous NVDIMM data clusters are reference-counted, we do not need to
>    worry about them being deleted.
>
> I encountered a number of problems:
> 1. The migration and snapshot logic is mixed together and needs to be
>    separated for NVDIMM.
> 2. Clusters have alignment requirements. When taking a snapshot we just
>    write data to disk contiguously, but because we need to add references
>    to clusters we really have to respect the alignment. For now I use a
>    small trick of padding the data out to the alignment, which I do not
>    think is a good approach.
> 3. Dirty log tracing may cause performance problems.
>
> In theory this scheme can handle snapshots of any kind of huge memory; we
> need to find the balance between guest performance (because of dirty log
> tracing) and snapshot saving time.
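The incremental scheme quoted above (full dump on the first snapshot, then only dirty pages, with clean pages referencing the previous snapshot's clusters) can be sketched roughly as follows. This is a toy illustration, not QEMU code: the names and the in-memory "snapshot" are invented for the example, and real dirty log tracing happens via KVM, not a flag set on write.

```c
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096
#define NPAGES    8

static uint8_t guest[NPAGES][PAGE_SIZE];  /* simulated NVDIMM region */
static uint8_t snap[NPAGES][PAGE_SIZE];   /* contents of the latest snapshot */
static uint8_t dirty[NPAGES];             /* dirty-page bitmap */

/* Guest store: in real life KVM's dirty log would record this write. */
static void guest_write(int page, uint8_t val)
{
    memset(guest[page], val, PAGE_SIZE);
    dirty[page] = 1;
}

/* Returns the number of pages actually written out.  On the first
 * snapshot every page is saved; afterwards only dirty pages are, and
 * clean pages implicitly keep referencing the clusters saved before. */
static int save_snapshot(int first)
{
    int saved = 0;
    for (int i = 0; i < NPAGES; i++) {
        if (first || dirty[i]) {
            memcpy(snap[i], guest[i], PAGE_SIZE);
            saved++;
        }
        dirty[i] = 0;   /* reset the dirty log after saving */
    }
    return saved;
}
```

A first snapshot writes all NPAGES pages; after dirtying two pages, a second snapshot writes only those two, which is the saving the scheme is after.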
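The alignment problem in point 2 comes from the fact that qcow2 reference counts whole clusters, so a region whose clusters are to be shared between snapshots must start and end on cluster boundaries. The padding trick amounts to rounding the saved length up to the cluster size; a minimal sketch (64 KiB is the qcow2 default cluster size, and the helper name is ours, not QEMU's):

```c
#include <stdint.h>

#define CLUSTER_SIZE (64 * 1024)  /* qcow2 default cluster size */

/* Round len up to the next cluster boundary so the saved region
 * occupies whole clusters that can be reference-counted individually. */
static uint64_t cluster_align_up(uint64_t len)
{
    return (len + CLUSTER_SIZE - 1) & ~(uint64_t)(CLUSTER_SIZE - 1);
}
```

The bitmask form only works because the cluster size is a power of two; the wasted space per snapshot is at most one cluster minus one byte, which is the padding cost being called "not a good way" above.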
If the snapshots are placed on the NVDIMM then save/load times should be
shorter. I'm not sure how practical that is since this approach may be
too expensive for users.
Stefan