qemu-block

Re: Block alignment of qcow2 compress driver


From: Kevin Wolf
Subject: Re: Block alignment of qcow2 compress driver
Date: Fri, 28 Jan 2022 14:19:44 +0100

On 28.01.2022 at 13:30, Hanna Reitz wrote:
> > > I just changed that line of code [2], as shown in [4].  I suppose
> > > the better thing to do would be to have an option for the NBD server
> > > to force-change the announced request alignment, because it can
> > > expect the qemu block layer code to auto-align requests through
> > > RMW.  Doing it in the client is wrong, because the NBD server might
> > > want to detect that the client sends unaligned requests and reject
> > > them (though ours doesn’t, it just traces such events[5] – note that
> > > it’s explicitly noted there that qemu will auto-align requests).
> > I know I said I didn't care about performance (in this case), but is
> > there in fact a penalty to sending unaligned requests to the qcow2
> > layer?  Or perhaps it cannot compress them?
> 
> In qcow2, only the whole cluster can be compressed, so writing compressed
> data means having to write the whole cluster.  qcow2 could implement the
> padding by itself, but we decided to just leave the burden of only writing
> full clusters (with the COMPRESSED write flag) on the callers.
> 
> Things like qemu-img convert and blockdev-backup just adhere to that by
> design; and the compress driver makes sure to set its request alignment
> accordingly so that requests to it will always be aligned to the cluster
> size (either by its user, or by the qemu block layer which performs the
> padding automatically).

I thought the more limiting factor would be that after auto-aligning the
first request by padding with zeros, the second request to the same
cluster would fail because compression doesn't allow using an already
allocated cluster:

    /* Compression can't overwrite anything. Fail if the cluster was already
     * allocated. */
    cluster_offset = get_l2_entry(s, l2_slice, l2_index);
    if (cluster_offset & L2E_OFFSET_MASK) {
        qcow2_cache_put(s->l2_table_cache, (void **) &l2_slice);
        return -EIO;
    }

Did you always test just a single request, or why don't you run into
this?

I guess checking L2E_OFFSET_MASK is, strictly speaking, wrong, because
it's invalid for compressed clusters (qcow2_get_cluster_type() feels
more appropriate), but in practice you will always have non-zero data
there, so it should error out here.

Kevin