qemu-devel

Re: [PATCH v2 15/20] iotests: 219: prepare for backup over block-copy


From: Max Reitz
Subject: Re: [PATCH v2 15/20] iotests: 219: prepare for backup over block-copy
Date: Thu, 23 Jul 2020 10:35:17 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.10.0

On 01.06.20 20:11, Vladimir Sementsov-Ogievskiy wrote:
> The further change of moving backup to be a on block-copy call will

-on?

> make copying chunk-size and cluster-size a separate things. So, even

s/a/two/

> with 64k cluster sized qcow2 image, default chunk would be 1M.
> Test 219 depends on specified chunk-size. Update it for explicit
> chunk-size for backup as for mirror.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  tests/qemu-iotests/219 | 13 +++++++------
>  1 file changed, 7 insertions(+), 6 deletions(-)
> 
> diff --git a/tests/qemu-iotests/219 b/tests/qemu-iotests/219
> index db272c5249..2bbed28f39 100755
> --- a/tests/qemu-iotests/219
> +++ b/tests/qemu-iotests/219
> @@ -203,13 +203,13 @@ with iotests.FilePath('disk.img') as disk_path, \
>      # but related to this also automatic state transitions like job
>      # completion), but still get pause points often enough to avoid making this
>      # test very slow, it's important to have the right ratio between speed and
> -    # buf_size.
> +    # copy-chunk-size.
>      #
> -    # For backup, buf_size is hard-coded to the source image cluster size (64k),
> -    # so we'll pick the same for mirror. The slice time, i.e. the granularity
> -    # of the rate limiting is 100ms. With a speed of 256k per second, we can
> -    # get four pause points per second. This gives us 250ms per iteration,
> -    # which should be enough to stay deterministic.
> +    # Chose 64k copy-chunk-size both for mirror (by buf_size) and backup (by
> +    # x-max-chunk). The slice time, i.e. the granularity of the rate limiting
> +    # is 100ms. With a speed of 256k per second, we can get four pause points
> +    # per second. This gives us 250ms per iteration, which should be enough to
> +    # stay deterministic.

Don’t we also have to limit the number of workers to 1 so we actually
keep 250 ms per iteration instead of just finishing four requests
immediately, then pausing for a second?
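The arithmetic behind that worry can be sketched in a few lines of back-of-the-envelope Python (the variable names are illustrative, not identifiers from the test; this only models the single-worker case the comment assumes):

```python
# Rate-limit arithmetic from the quoted comment: with a speed limit of
# 256 KiB/s and a 64 KiB copy chunk, one chunk is allowed every 250 ms,
# i.e. four pause points per second -- but only if one worker copies one
# chunk at a time. Several parallel workers could issue their chunks
# back to back and then sit idle until the next rate-limit slice.
speed = 262144      # 'speed' job argument, bytes per second
chunk_size = 65536  # buf_size (mirror) / x-max-chunk (backup), bytes

seconds_per_chunk = chunk_size / speed         # time budget per chunk
pause_points_per_second = speed // chunk_size  # chunks allowed per second

print(seconds_per_chunk)         # 0.25
print(pause_points_per_second)   # 4
```

With more than one worker, all four chunks of a one-second budget could complete almost immediately, which is exactly the determinism concern raised above.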

>      test_job_lifecycle(vm, 'drive-mirror', has_ready=True, job_args={
>          'device': 'drive0-node',
> @@ -226,6 +226,7 @@ with iotests.FilePath('disk.img') as disk_path, \
>                  'target': copy_path,
>                  'sync': 'full',
>                  'speed': 262144,
> +                'x-max-chunk': 65536,
>                  'auto-finalize': auto_finalize,
>                  'auto-dismiss': auto_dismiss,
>              })
> 


