Re: [PATCH v6 1/3] block: introduce compress filter driver
From: Andrey Shinkevich
Subject: Re: [PATCH v6 1/3] block: introduce compress filter driver
Date: Tue, 12 Nov 2019 10:07:47 +0000
On 12/11/2019 12:39, Kevin Wolf wrote:
> Am 11.11.2019 um 17:04 hat Andrey Shinkevich geschrieben:
>> Allow writing all the data compressed through the filter driver.
>> The written data will be aligned to the cluster size.
>> In QEMU's current implementation, such data can be written to
>> unallocated clusters only. May be used for a backup job.
>>
>> Suggested-by: Max Reitz <address@hidden>
>> Signed-off-by: Andrey Shinkevich <address@hidden>
>
>> +static BlockDriver bdrv_compress = {
>> + .format_name = "compress",
>> +
>> + .bdrv_open = zip_open,
>> + .bdrv_child_perm = zip_child_perm,
>
> Why do you call the functions zip_* when the driver is called compress?
> I think zip would be a driver for zip archives, which we don't use here.
>
Kevin,
Thanks for your response.
I was trying to come up with a short form of 'compress'.
I will change 'zip' to something like 'compr'.
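For illustration only, the renamed callback table might look like the sketch below; the compr_* names are just the suggestion from this reply, not code from the patch:

```c
/* Hypothetical rename sketch (not from the patch): each zip_* callback
 * becomes compr_*, while the driver name stays "compress". */
static BlockDriver bdrv_compress = {
    .format_name         = "compress",

    .bdrv_open           = compr_open,
    .bdrv_child_perm     = compr_child_perm,

    .bdrv_getlength      = compr_getlength,
    .bdrv_co_truncate    = compr_co_truncate,
    /* ... remaining callbacks renamed the same way ... */
};
```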
>> + .bdrv_getlength = zip_getlength,
>> + .bdrv_co_truncate = zip_co_truncate,
>> +
>> + .bdrv_co_preadv = zip_co_preadv,
>> + .bdrv_co_preadv_part = zip_co_preadv_part,
>> + .bdrv_co_pwritev = zip_co_pwritev,
>> + .bdrv_co_pwritev_part = zip_co_pwritev_part,
>
> If you implement .bdrv_co_preadv/pwritev_part, isn't the implementation
> of .bdrv_co_preadv/pwritev (without _part) dead code?
>
Understood; I will remove the dead code path.
>> + .bdrv_co_pwrite_zeroes = zip_co_pwrite_zeroes,
>> + .bdrv_co_pdiscard = zip_co_pdiscard,
>> + .bdrv_refresh_limits = zip_refresh_limits,
>> +
>> + .bdrv_eject = zip_eject,
>> + .bdrv_lock_medium = zip_lock_medium,
>> +
>> + .bdrv_co_block_status = bdrv_co_block_status_from_backing,
>
> Why not use bs->file? (Well, apart from the still not merged filter
> series by Max...)
>
We need to keep the backing chain unbroken with the filter inserted, so
the backing child must not be NULL. That is necessary for the backup
job, for instance. When I initialized both children to point at the same
node, the code did not work properly. I will have to reproduce that
again to tell you exactly what happened, and I would appreciate your
advice on a proper design.
Andrey
>> + .bdrv_recurse_is_first_non_filter = zip_recurse_is_first_non_filter,
>> +
>> + .has_variable_length = true,
>> + .is_filter = true,
>> +};
>
> Kevin
>
--
With the best regards,
Andrey Shinkevich