
Re: cluster_size got from backup_calculate_cluster_size()


From: Vladimir Sementsov-Ogievskiy
Subject: Re: cluster_size got from backup_calculate_cluster_size()
Date: Thu, 21 May 2020 23:53:12 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.8.0

21.05.2020 21:19, John Snow wrote:


On 5/21/20 5:56 AM, Derek Su wrote:
Hi,

The cluster_size got from backup_calculate_cluster_size(),
MAX(BACKUP_CLUSTER_SIZE_DEFAULT, bdi.cluster_size), is 64K regardless
of the target image's cluster size.

Not regardless -- but it is using 64K as a minimum.
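
For reference, the current logic is roughly the following (a paraphrased
sketch, not verbatim QEMU code; the error handling is elided):

```
/* Paraphrased sketch of backup_calculate_cluster_size(); not verbatim. */
#define BACKUP_CLUSTER_SIZE_DEFAULT (1 << 16)   /* 64K */

ret = bdrv_get_info(target, &bdi);
if (ret < 0 || bdi.cluster_size == 0) {
    /* cluster size unknown: fall back to the default */
    return BACKUP_CLUSTER_SIZE_DEFAULT;
}
/* otherwise, the default acts as a floor */
return MAX(BACKUP_CLUSTER_SIZE_DEFAULT, bdi.cluster_size);
```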



For example:

If the cluster sizes of the source and target qcow2 images are both 16K,
the 64K from backup_calculate_cluster_size() results in unwanted copies
of clusters.

This behavior slows down the backup (block-copy) process when the
source image receives lots of random writes: a single 16K write then
forces a 64K copy, a 4x amplification.

Are we talking about incremental, full, or top?



Is the following calculation reasonable for the above issue?


```
static int64_t backup_calculate_cluster_size(BlockDriverState *target,
                                              Error **errp)
{
     ...

     ret = bdrv_get_info(target, &bdi);

     ...

     /* use the target's own cluster size unless it is unknown */
     return (bdi.cluster_size == 0 ?
                 BACKUP_CLUSTER_SIZE_DEFAULT : bdi.cluster_size);
}
```

Thanks.
Regards,

Derek



Might be reasonable. When I made this the "default", it actually used to
just be hardcoded. We could continue in this direction, go all the way,
and turn it into a tunable.

I didn't think to make it lower than 64K because I was thinking about
the linear, full backup case where cluster sizes that were too small
might slow down the loop with too much metadata.

Yes, currently the backup loop copies cluster-by-cluster, so if we allow 
clusters smaller than 64K, it may become slower (at least for full backup). 
Still, my work on backup performance is close to its end, and I hope it will 
be merged in 5.1. One of its effects is that the backup copying loop may copy 
more than one cluster at once, so this problem will be gone.
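
To illustrate the idea, a toy sketch (not the actual block-copy code;
CLUSTER_SIZE, MAX_CHUNK and the helpers are invented for the example) of
coalescing contiguous dirty clusters into one copy request:

```
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CLUSTER_SIZE (16 * 1024)    /* illustrative: 16K clusters */
#define MAX_CHUNK    (1024 * 1024)  /* illustrative cap per request */

/* stand-in for an actual copy request */
static void copy_range(int64_t offset, int64_t bytes)
{
    printf("copy %jd bytes at offset %jd\n",
           (intmax_t)bytes, (intmax_t)offset);
}

/* Coalesce runs of dirty clusters into single, larger copy requests
 * instead of issuing one request per cluster. */
static void backup_loop(const bool *dirty, int64_t nb_clusters)
{
    int64_t i = 0;
    while (i < nb_clusters) {
        if (!dirty[i]) {
            i++;
            continue;
        }
        int64_t start = i;
        while (i < nb_clusters && dirty[i] &&
               (i - start) * CLUSTER_SIZE < MAX_CHUNK) {
            i++;
        }
        copy_range(start * CLUSTER_SIZE, (i - start) * CLUSTER_SIZE);
    }
}

int main(void)
{
    /* two dirty runs: clusters 0-1 and 3-5 -> two requests, not five */
    bool dirty[] = { true, true, false, true, true, true, false, false };
    backup_loop(dirty, (int64_t)(sizeof(dirty) / sizeof(dirty[0])));
    return 0;
}
```

With copying done this way, the per-cluster overhead matters much less,
since adjacent dirty clusters are merged into one request.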

Keeping this in mind, I think we can allow smaller clusters now; small 
clusters are a rare special case anyway.


(And the default qcow2 cluster size is 64K, so it seemed like a good choice at
the time.)

We could create a default-cluster-size tunable, perhaps. It's at 64K
now, but perhaps you could override it down to 16K or lower. We could
possibly treat a value of 0 as "no minimum; use the smallest common size."
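
A minimal sketch of how such a tunable could behave (min_cluster and the
function name are hypothetical, not an existing QEMU option):

```
#include <stdint.h>

#define BACKUP_CLUSTER_SIZE_DEFAULT (1 << 16)   /* 64K */
#define MAX(a, b) ((a) > (b) ? (a) : (b))

/* min_cluster is a hypothetical tunable; 0 means "no minimum,
 * use the target's own cluster size". */
static int64_t calculate_cluster_size(int64_t target_cluster_size,
                                      int64_t min_cluster)
{
    if (target_cluster_size == 0) {
        /* target size unknown: keep the current default */
        return BACKUP_CLUSTER_SIZE_DEFAULT;
    }
    if (min_cluster == 0) {
        return target_cluster_size;
    }
    return MAX(min_cluster, target_cluster_size);
}
```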

This is a separate issue entirely, but we may also wish to begin
offering a cluster-size override directly. In the case that we know this
value is unsafe, we reject the command. In the case that it's ambiguous
due to lack of info, we can defer to the user's judgment. This would
allow us to force the backup to run in cases where we presently abort
out of fear.

CCing Vladimir who has been working on the backup loop extensively.

--js



--
Best regards,
Vladimir


