Re: [Qemu-devel] [PATCH] qcow2: Add bdrv_discard support
From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH] qcow2: Add bdrv_discard support
Date: Fri, 28 Jan 2011 09:57:05 +0000
User-agent: Mutt/1.5.20 (2009-06-14)
On Thu, Jan 27, 2011 at 01:40:21PM +0100, Kevin Wolf wrote:
> +/*
> + * This discards as many clusters of nb_clusters as possible at once (i.e.
> + * all clusters in the same L2 table) and returns the number of discarded
> + * clusters.
> + */
> +static int discard_single_l2(BlockDriverState *bs, uint64_t offset,
> +    unsigned int nb_clusters)
> +{
> +    BDRVQcowState *s = bs->opaque;
> +    uint64_t l2_offset, *l2_table;
> +    int l2_index;
> +    int ret;
> +    int i;
> +
> +    ret = get_cluster_table(bs, offset, &l2_table, &l2_offset, &l2_index);
> +    if (ret < 0) {
> +        return ret;
> +    }
> +
> +    /* Limit nb_clusters to one L2 table */
> +    nb_clusters = MIN(nb_clusters, s->l2_size - l2_index);
> +
> +    for (i = 0; i < nb_clusters; i++) {
> +        uint64_t old_offset;
> +
> +        old_offset = be64_to_cpu(l2_table[l2_index + i]);
> +        old_offset &= ~QCOW_OFLAG_COPIED;
> +
> +        if (old_offset == 0) {
> +            continue;
> +        }
> +
> +        /* First remove L2 entries */
> +        qcow2_cache_entry_mark_dirty(s->l2_table_cache, l2_table);
> +        l2_table[l2_index + i] = cpu_to_be64(0);
> +
> +        /* Then decrease the refcount */
> +        qcow2_free_any_clusters(bs, old_offset, 1);
> +    }
> +
> +    ret = qcow2_cache_put(bs, s->l2_table_cache, (void**) &l2_table);
> +    if (ret < 0) {
> +        return ret;
> +    }
There is no loop to continue the discard across L2 table boundaries. A
guest could discard the entire disk in one request, for example from an
installer.
> +
> +    return nb_clusters;
> +}
> +
> +int qcow2_discard_clusters(BlockDriverState *bs, uint64_t offset,
> +    int nb_sectors)
qcow2_discard_sectors() since units are in sectors not clusters?
> +{
> +    BDRVQcowState *s = bs->opaque;
> +    uint64_t end_offset;
> +    unsigned int nb_clusters;
> +    int ret;
> +
When offset=0x10200, nb_sectors=1, and cluster_size=65536...
> +    end_offset = offset + (nb_sectors << BDRV_SECTOR_BITS);
> +
> +    /* Round start up and end down */
> +    offset = align_offset(offset, s->cluster_size);
> +    end_offset &= ~(s->cluster_size - 1);
offset=0x20000
end_offset=0x10000
> +
> +    nb_clusters = size_to_clusters(s, end_offset - offset);
nb_clusters=4294967295
...and the loop will discard almost 256 TB of data. We need to check for
overflow/underflow here, or do this alignment in the block layer.
Stefan