
Re: [PATCH 0/7] qcow2: compressed write cache

From: Vladimir Sementsov-Ogievskiy
Subject: Re: [PATCH 0/7] qcow2: compressed write cache
Date: Wed, 10 Feb 2021 17:35:58 +0300

On 10.02.2021 15:35, Kevin Wolf wrote:
On 29.01.2021 at 17:50, Vladimir Sementsov-Ogievskiy wrote:
Hi all!

I know, I have several series waiting for a resend, but I had to switch
to another task spawned from our customer's bug.

Original problem: we use O_DIRECT for all VM images in our product; it's
the policy. The only exception is the backup target qcow2 image for
compressed backup, because compressed backup is extremely slow with
O_DIRECT (due to unaligned writes). The customer complains that backup
produces a lot of page cache.

So we can either implement some internal cache or use fadvise somehow.
Backup has several async workers, which write simultaneously, so either
way we have to track host cluster filling (before dropping the page cache
corresponding to the cluster). So, if we have to track anyway, let's
try to implement the cache.

The idea is simple: cache small unaligned writes and flush the cluster when it is full.
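
The idea above (buffer unaligned compressed writes per host cluster, flush one
aligned write once the cluster fills) might be sketched roughly like this. This
is a hypothetical illustration with made-up names, not the code from the patch
series:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch, not the actual patch code: buffer small unaligned
 * writes belonging to one host cluster, and report when the cluster is
 * complete so it can be flushed with a single aligned write (which is
 * what O_DIRECT needs). */

#define CLUSTER_SIZE 65536u

typedef struct ClusterEntry {
    uint64_t cluster_offset;      /* host offset, aligned down to cluster */
    uint8_t data[CLUSTER_SIZE];   /* aligned bounce buffer */
    uint32_t bytes_filled;        /* total bytes cached so far */
} ClusterEntry;

/* Copy one unaligned chunk into the cluster buffer.  Returns 1 when the
 * cluster became full and should be flushed (and the entry dropped). */
static int cache_write(ClusterEntry *e, uint64_t offset,
                       const void *buf, uint32_t len)
{
    uint64_t in_cluster = offset - e->cluster_offset;

    assert(in_cluster + len <= CLUSTER_SIZE);
    memcpy(e->data + in_cluster, buf, len);
    e->bytes_filled += len;
    return e->bytes_filled == CLUSTER_SIZE;
}
```

A real implementation would also need a hash table of in-flight clusters,
flushing on bdrv_flush/close, and handling for overlapping writes; this only
shows the fill-tracking idea.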

I haven't had the time to properly look at the patches, but is there
anything in them that is actually specific to compressed writes?

I'm asking because you may remember that a few years ago I talked at KVM
Forum about how a data cache could be used for small unaligned (to
cluster sizes) writes to reduce COW cost (mostly for sequential access
where the other part of the cluster would be filled soon enough).

So if we're introducing some kind of data cache, wouldn't it be nice to
use it even in the more general case instead of just restricting it to
compressed writes?

The compressed-write-specific things are:

 - setting data_end per cluster at some point, so that we can flush the cluster
even when it is not full. In this case we align data_end up, as we know that the
remaining part of the cluster is unused. But that may be refactored into an option.
 - waiting for the whole cluster to be filled
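
The data_end point above could be sketched as a small predicate. Again a
hypothetical illustration with invented names, just to make the flush condition
concrete: once the writer announces data_end for a cluster, the cache knows the
tail of the cluster stays unused, so the cluster is flushable as soon as the
cached bytes reach data_end, with the unused tail padded (aligned up) for the
O_DIRECT write:

```c
#include <assert.h>
#include <stdint.h>

#define CLUSTER_SIZE 65536u

/* Hypothetical helper: decide whether a (possibly partially filled)
 * cluster can be flushed.  data_end == 0 means the end of the data in
 * this cluster is not known yet, so we must wait for a full cluster. */
static int cluster_flushable(uint32_t bytes_filled, uint32_t data_end)
{
    uint32_t goal = data_end ? data_end : CLUSTER_SIZE;

    assert(goal <= CLUSTER_SIZE);
    return bytes_filled >= goal;
}
```

With this shape, dropping the data_end special case (always waiting for a full
cluster) is exactly the kind of option that would make the cache reusable for
the general unaligned-copy case Kevin asks about.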

So it could be reused for some (more or less) sequential copying process with
unaligned chunks. But different copying jobs in QEMU always have aligned
chunks; the only exception is copying to a compressed target.

Still, I intentionally implemented it in a separate file, and there is no use of
BDRVQcow2State in it, so it's simple enough to refactor and reuse if needed.

I can rename it to "unaligned_copy_cache" or something like this.

Best regards,
