Re: [ovirt-users] Re: Any way to terminate stuck export task


From: Nir Soffer
Subject: Re: [ovirt-users] Re: Any way to terminate stuck export task
Date: Tue, 6 Jul 2021 18:44:04 +0300

On Tue, Jul 6, 2021 at 5:55 PM Gianluca Cecchi
<gianluca.cecchi@gmail.com> wrote:
>
> On Tue, Jul 6, 2021 at 2:52 PM Nir Soffer <nsoffer@redhat.com> wrote:
>
>>
>>
>> Too bad.
>>
>> You can evaluate how oVirt 4.4 will work with this appliance using
>> this dd command:
>>
>>     dd if=/dev/zero bs=8M count=38400 of=/path/to/new/disk \
>>         oflag=direct conv=fsync
>>
>> We don't use dd for this, but the operation is the same on NFS < 4.2.
>>
>
> I confirm I'm able to saturate the 1 Gb/s link. I tried creating a 10 GiB
> file on the StoreOnce appliance:
>
>  # time dd if=/dev/zero bs=8M count=1280 \
>      of=/rhev/data-center/mnt/172.16.1.137\:_nas_EXPORT-DOMAIN/ansible_ova/test.img \
>      oflag=direct conv=fsync
> 1280+0 records in
> 1280+0 records out
> 10737418240 bytes (11 GB) copied, 98.0172 s, 110 MB/s
>
> real 1m38.035s
> user 0m0.003s
> sys 0m2.366s
>
> So are you saying that after upgrading to 4.4.6 (or the just-released
> 4.4.7) I should be able to export at this speed?

The preallocation part will run at the same speed, and then you need to
copy the used parts of the disk, with the time depending on how much
data is used.
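
For reference, the copy step is similar in spirit to a qemu-img convert
run like the one below (a sketch only; the exact formats and options
oVirt passes depend on the storage and disk configuration, and the
paths here are made up):

    # Illustrative: copy the used data of a disk to the export domain,
    # bypassing the host page cache on both source and destination.
    qemu-img convert -p -f qcow2 -O qcow2 -t none -T none \
        /path/to/source/disk.qcow2 /path/to/export/disk.qcow2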

>  Or do I need NFS v4.2 anyway?

That speed is without NFS 4.2. With NFS 4.2 the entire allocation will
take less than a second, without consuming any network bandwidth.
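
A quick way to see this is fallocate, which on NFS 4.2 maps to the NFS
ALLOCATE operation, so the space is reserved on the server without
sending any data over the wire (the path below is just an example):

    # Completes almost instantly on NFS 4.2; on older NFS versions the
    # fallocate() call is not supported and tools must write zeroes
    # instead, as the dd command above does.
    fallocate -l 300G /path/to/new/disk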

> BTW: is there any capping put in place by oVirt on the export phase (the
> qemu-img command in practice)? For example, designed not to perturb the
> activity of the hypervisor? Or do you think that if I have a 10 Gb/s
> network backend, powerful disks on oVirt, and powerful NFS server
> processing power, I should get much more speed?

We don't have any capping in place; usually people complain that copying
images is too slow.

In general, when copying to file-based storage we don't use the -W option
(unordered writes), so the copy will be slower compared with block-based
storage, where qemu-img uses 8 concurrent writes. So in a way we always
cap copies to file-based storage. To get maximum throughput you need
to run multiple copies at the same time.
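
As a rough sketch (device paths and image names here are made up), the
difference looks like this, with parallel copies for the file storage
case:

    # Block storage: -W lets qemu-img issue writes out of order,
    # keeping several requests in flight.
    qemu-img convert -p -W -f raw -O raw /dev/vg/src /dev/vg/dst

    # File storage: no -W, so writes are ordered; run several copies
    # in parallel to fill the available bandwidth.
    qemu-img convert -p -f qcow2 -O qcow2 disk1.qcow2 /export/disk1.qcow2 &
    qemu-img convert -p -f qcow2 -O qcow2 disk2.qcow2 /export/disk2.qcow2 &
    wait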

Nir



