From: Kevin Wolf
Subject: Re: [Qemu-devel] [Fwd: [PATCH v2] vpc: Implement bdrv_co_get_block_status()]
Date: Thu, 19 Feb 2015 13:07:34 +0100
User-agent: Mutt/1.5.21 (2010-09-15)

On 18.02.2015 at 22:03, Peter Lieven wrote:
> On 18.02.2015 at 21:57, Peter Lieven wrote:
> > This implements bdrv_co_get_block_status() for VHD images. This can
> > significantly speed up qemu-img convert operation because only with this
> > function implemented sparseness can be considered. (Before, converting a
> > 1 TB empty image took several minutes for me, now it's instantaneous.)
> >
> > Signed-off-by: Kevin Wolf <address@hidden>
> > ---
> >  block/vpc.c | 50 ++++++++++++++++++++++++++++++++++++++++++++++++--
> >  1 file changed, 48 insertions(+), 2 deletions(-)
> >
> > diff --git a/block/vpc.c b/block/vpc.c
> > index 7fddbf0..1533b6a 100644
> > --- a/block/vpc.c
> > +++ b/block/vpc.c
> > @@ -597,6 +597,51 @@ static coroutine_fn int vpc_co_write(BlockDriverState *bs, int64_t sector_num,
> >      return ret;
> >  }
> >
> > +static int64_t coroutine_fn vpc_co_get_block_status(BlockDriverState *bs,
> > +        int64_t sector_num, int nb_sectors, int *pnum)
> > +{
> > +    BDRVVPCState *s = bs->opaque;
> > +    VHDFooter *footer = (VHDFooter*) s->footer_buf;
> > +    int64_t start, offset, next;
> > +    bool allocated;
> > +    int n;
> > +
> > +    if (be32_to_cpu(footer->type) == VHD_FIXED) {
> > +        *pnum = nb_sectors;
> > +        return BDRV_BLOCK_RAW | BDRV_BLOCK_OFFSET_VALID | BDRV_BLOCK_DATA |
> > +               (sector_num << BDRV_SECTOR_BITS);
> > +    }
> > +
> > +    offset = get_sector_offset(bs, sector_num, 0);
> > +    start = offset;
> > +    allocated = (offset != -1);
> > +    *pnum = 0;
> > +
> > +    do {
> > +        /* All sectors in a block are contiguous (without using the bitmap) */
> > +        n = ROUND_UP(sector_num + 1, s->block_size / BDRV_SECTOR_SIZE)
> > +          - sector_num;
> > +        n = MIN(n, nb_sectors);
> > +
> > +        *pnum += n;
> > +        sector_num += n;
> > +        nb_sectors -= n;
> > +        next = start + (*pnum * BDRV_SECTOR_SIZE);
> > +
> > +        if (nb_sectors == 0) {
> > +            break;
> > +        }
> > +
> > +        offset = get_sector_offset(bs, sector_num, 0);
> > +    } while ((allocated && offset == next) || (!allocated && offset == -1));
> > +
> > +    if (allocated) {
> > +        return BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID | start;
> > +    } else {
> > +        return 0;
> 
> Shouldn't this be
> 
>  return BDRV_BLOCK_ZERO;
> 
> ?
> 
> vpc_read memsets all blocks with offset == -1 to 0x00.

Yes, but the blocks are still unallocated, as opposed to allocated as
zero clusters, and this is indicated by 0.

vpc_get_info() sets bdi->unallocated_blocks_are_zero = true, so we end
up with bdrv_co_get_block_status() returning BDRV_BLOCK_ZERO, but not
BDRV_BLOCK_ALLOCATED (which would be set if we had BDRV_BLOCK_ZERO
here).
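
To illustrate, the generic bdrv_co_get_block_status() in block.c does
roughly the following with the driver's return value (a simplified
sketch, not the literal code):

    ret = bs->drv->bdrv_co_get_block_status(bs, sector_num, nb_sectors, pnum);

    /* DATA or ZERO from the driver means the block is allocated
     * in this layer */
    if (ret & (BDRV_BLOCK_DATA | BDRV_BLOCK_ZERO)) {
        ret |= BDRV_BLOCK_ALLOCATED;
    }

    /* Neither DATA nor ZERO: the block is unallocated. If the format
     * guarantees that unallocated blocks read as zero (vpc does, via
     * bdi->unallocated_blocks_are_zero), report ZERO anyway, but
     * without ALLOCATED. */
    if (!(ret & BDRV_BLOCK_DATA) && !(ret & BDRV_BLOCK_ZERO) &&
        bdrv_unallocated_blocks_are_zero(bs)) {
        ret |= BDRV_BLOCK_ZERO;
    }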

I'm not sure if a wrong allocated flag would cause problems currently,
but it's definitely necessary to get right once we add support for
differencing images (patches are on the list, pending review).

> Not for this patch, but couldn't we use your new function to significantly
> speed up reading of contiguous allocated areas in vpc_read?

There aren't really contiguous blocks in VHD; you always have a bitmap
in between. In some cases it might be better to read the bitmap along
with the two adjacent blocks and throw the bitmap part of the buffer
away in order to save one read request, but with VHD's relatively large
block sizes it's probably not going to help that much.
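
For reference, what get_sector_offset() does for reads is roughly this
(a simplified sketch; the real code in block/vpc.c also handles the
write path and bitmap updates):

    BDRVVPCState *s = bs->opaque;
    uint64_t offset = sector_num * BDRV_SECTOR_SIZE;
    uint32_t pagetable_index = offset / s->block_size;
    uint32_t pageentry_index = (offset % s->block_size) / BDRV_SECTOR_SIZE;

    if (pagetable_index >= s->max_table_entries ||
        s->pagetable[pagetable_index] == 0xffffffff) {
        return -1;  /* block not allocated */
    }

    /* The BAT entry points at the block's sector bitmap; the data
     * only starts after s->bitmap_size bytes, so two logically
     * adjacent blocks are never physically contiguous in the file. */
    return (uint64_t)s->pagetable[pagetable_index] * BDRV_SECTOR_SIZE
           + s->bitmap_size
           + (uint64_t)pageentry_index * BDRV_SECTOR_SIZE;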

It's also a question of whether we want to invest significant effort
into making vpc efficient enough to reasonably run a VM from it.
Our current assumption is that the support is mostly there for qemu-img
convert.

Kevin


