From: Kevin Wolf
Subject: Re: [Qemu-devel] [PATCH v5 12/21] block: define get_block_status return value
Date: Mon, 5 May 2014 16:34:12 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On 04.09.2013 at 19:00, Paolo Bonzini wrote:
> Define the return value of get_block_status.  Bits 0, 1, 2 and 9-62
> are valid; bit 63 (the sign bit) is reserved for errors.  Bits 3-8
> are left for future extensions.
> 
> The return code is compatible with the old is_allocated API: if a driver
> only returns 0 or 1 (aka BDRV_BLOCK_DATA) like is_allocated used to,
> clients of is_allocated will not have any change in behavior.  Still,
> we will return more precise information in the next patches and the
> new definition of bdrv_is_allocated is already prepared for this.
> 
> Reviewed-by: Eric Blake <address@hidden>
> Signed-off-by: Paolo Bonzini <address@hidden>
> ---
>  block.c               | 10 ++++++++--
>  include/block/block.h | 26 ++++++++++++++++++++++++++
>  2 files changed, 34 insertions(+), 2 deletions(-)
> 
> diff --git a/block.c b/block.c
> index 47a5d63..0eb5e43 100644
> --- a/block.c
> +++ b/block.c
> @@ -3073,7 +3073,7 @@ static int64_t coroutine_fn bdrv_co_get_block_status(BlockDriverState *bs,
>  
>      if (!bs->drv->bdrv_co_get_block_status) {
>          *pnum = nb_sectors;
> -        return 1;
> +        return BDRV_BLOCK_DATA;
>      }
>  
>      return bs->drv->bdrv_co_get_block_status(bs, sector_num, nb_sectors, pnum);
> @@ -3123,7 +3123,13 @@ int64_t bdrv_get_block_status(BlockDriverState *bs, int64_t sector_num,
>  int coroutine_fn bdrv_is_allocated(BlockDriverState *bs, int64_t sector_num,
>                                     int nb_sectors, int *pnum)
>  {
> -    return bdrv_get_block_status(bs, sector_num, nb_sectors, pnum);
> +    int64_t ret = bdrv_get_block_status(bs, sector_num, nb_sectors, pnum);
> +    if (ret < 0) {
> +        return ret;
> +    }
> +    return
> +        (ret & BDRV_BLOCK_DATA) ||
> +        ((ret & BDRV_BLOCK_ZERO) && !bdrv_has_zero_init(bs));
>  }

I'm afraid that the assumptions eventually became too clever:

$ ./qemu-img create -f qcow2 /tmp/backing.qcow2 1G
Formatting '/tmp/backing.qcow2', fmt=qcow2 size=1073741824 encryption=off cluster_size=65536 lazy_refcounts=off
$ ./qemu-img create -f qcow2 -b /tmp/backing.qcow2 /tmp/overlay.qcow2 2G
Formatting '/tmp/overlay.qcow2', fmt=qcow2 size=2147483648 backing_file='/tmp/backing.qcow2' encryption=off cluster_size=65536 lazy_refcounts=off
$ ./qemu-io -c 'alloc 1G 64k' /tmp/overlay.qcow2
65536/65536 sectors allocated at offset 1 GiB

BDRV_BLOCK_ZERO is set for everything past the end of the backing file
(which kind of makes sense), and bdrv_has_zero_init() returns false for
anything that has a backing file (which makes sense, too), so now we're
reporting all sectors past the end of the backing file as allocated
(which is bad).

I think we're missing the information about whether the BDRV_BLOCK_ZERO
flag actually comes from bs itself or from the backing chain. Do we need
another flag, or does someone have a better idea?

Kevin
