Re: [PATCH v6 1/3] block: introduce compress filter driver
From: Kevin Wolf
Subject: Re: [PATCH v6 1/3] block: introduce compress filter driver
Date: Tue, 12 Nov 2019 10:39:12 +0100
User-agent: Mutt/1.12.1 (2019-06-15)
Am 11.11.2019 um 17:04 hat Andrey Shinkevich geschrieben:
> Allow writing all the data compressed through the filter driver.
> The written data will be aligned to the cluster size.
> With the current QEMU implementation, such data can be written to
> unallocated clusters only. The filter may be used for a backup job.
>
> Suggested-by: Max Reitz <address@hidden>
> Signed-off-by: Andrey Shinkevich <address@hidden>
> +static BlockDriver bdrv_compress = {
> + .format_name = "compress",
> +
> + .bdrv_open = zip_open,
> + .bdrv_child_perm = zip_child_perm,
Why do you call the functions zip_* when the driver is called compress?
I think zip would be a driver for zip archives, which we don't use here.
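To make the suggestion concrete, a sketch of the same table with the callbacks renamed to match the driver name (illustrative only; it simply mirrors the quoted hunk with `compress_*` in place of `zip_*`):

```c
static BlockDriver bdrv_compress = {
    .format_name     = "compress",

    .bdrv_open       = compress_open,
    .bdrv_child_perm = compress_child_perm,
    /* ...remaining callbacks renamed the same way... */
};
```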
> + .bdrv_getlength = zip_getlength,
> + .bdrv_co_truncate = zip_co_truncate,
> +
> + .bdrv_co_preadv = zip_co_preadv,
> + .bdrv_co_preadv_part = zip_co_preadv_part,
> + .bdrv_co_pwritev = zip_co_pwritev,
> + .bdrv_co_pwritev_part = zip_co_pwritev_part,
If you implement .bdrv_co_preadv/pwritev_part, isn't the implementation
of .bdrv_co_preadv/pwritev (without _part) dead code?
> + .bdrv_co_pwrite_zeroes = zip_co_pwrite_zeroes,
> + .bdrv_co_pdiscard = zip_co_pdiscard,
> + .bdrv_refresh_limits = zip_refresh_limits,
> +
> + .bdrv_eject = zip_eject,
> + .bdrv_lock_medium = zip_lock_medium,
> +
> + .bdrv_co_block_status = bdrv_co_block_status_from_backing,
Why not use bs->file? (Well, apart from the still not merged filter
series by Max...)
> + .bdrv_recurse_is_first_non_filter = zip_recurse_is_first_non_filter,
> +
> + .has_variable_length = true,
> + .is_filter = true,
> +};
Kevin