Re: [Qemu-devel] [RFC PATCH v2] Specification for qcow2 version 3


From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [RFC PATCH v2] Specification for qcow2 version 3
Date: Thu, 13 Oct 2011 15:43:04 +0100

On Wed, Oct 12, 2011 at 3:58 PM, Kevin Wolf <address@hidden> wrote:
> Am 12.10.2011 16:37, schrieb Stefan Hajnoczi:
>> On Wed, Oct 12, 2011 at 2:31 PM, Kevin Wolf <address@hidden> wrote:
>>> Am 12.10.2011 14:51, schrieb Stefan Hajnoczi:
>>>>> Also, a bit in the L2 offset to say "there is no L2 table", because
>>>>> all clusters in the L2 table are contiguous, so we avoid L2 entirely.
>>>>> Obviously this requires an optimization step to detect or create such
>>>>> a condition.
>>>>
>>>> There are several reserved L1 entry bits which could be used to mark
>>>> this mode.  This mode severely restricts qcow2 features though: how
>>>> would snapshots and COW work?  Perhaps by breaking the huge cluster
>>>> back into an L2 table with individual clusters?  Backing files also
>>>> cannot be used - unless we extend the sub-clusters approach and also
>>>> keep a large bitmap with allocated/unallocated/zero information.
>>>>
>>>> A mode like this could be used for best performance on local storage,
>>>> where efficient image transport (e.g. scp or http) is not required.
>>>> Actually I think this is reasonable: we could use qemu-img convert to
>>>> produce a compact qcow2 for export and use the L2-less qcow2 for
>>>> running the actual VM.
>>>>
>>>> Kevin: what do you think about fleshing out this mode instead of 
>>>> sub-clusters?
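
As a rough illustration of the reserved-bit idea quoted above, a flag in
the L1 entry could mark entries that point straight at data with no L2
table. The bit value and names here are made up for illustration; nothing
like this is in the spec:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical: repurpose one of the currently reserved bits in an L1
 * table entry to mean "this entry points directly at one contiguous run
 * of data; there is no L2 table".  Bit 62 is just an example of an
 * unused bit. */
#define L1E_NO_L2_TABLE (UINT64_C(1) << 62)

static bool l1_entry_is_flat(uint64_t l1_entry)
{
    return l1_entry & L1E_NO_L2_TABLE;
}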
>>>
>>> I'm hesitant about something like this, as it adds quite a bit of
>>> complexity and I'm not sure there are practical use cases for it at all.
>>>
>>> If you take the current cluster sizes, an L2 table contains 512 MB of
>>> data, so you would lose any sparseness. You would probably already get
>>> full allocation just by creating a file system on the image.
>>>
>>> But even if you do have a use case where sparseness doesn't matter, the
>>> effect is very much the same as allowing a 512 MB cluster size and not
>>> changing any of the qcow2 internals.
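
(Checking the 512 MB figure, assuming the default 64 KiB cluster size: an
L2 table is itself one cluster of 8-byte entries, so

    entries per L2 table = 64 KiB / 8 B  = 8192
    data mapped per L2   = 8192 * 64 KiB = 512 MiB

which is why dropping the L2 level makes 512 MiB the smallest allocation
unit.)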
>>
>> I guess I'm thinking of the 512 MB cluster size situation, because
>> we'd definitely want a COW bitmap in order to keep backing files and
>> sparseness.
>>
>>> (What would the use case be? Backing files or snapshots with a COW
>>> granularity of 512 MB isn't going to fly. That leaves only something
>>> like encryption.)
>>
>> COW granularity needs to stay at 64-256 KB since those are reasonable
>> request sizes for COW.
>
> But how do you do that without L2 tables? What you're describing
> (different sizes for allocation and COW) is exactly what subclusters are
> doing. I can't see how switching to 512 MB clusters and a single-level
> table can make that work.

Yes, very large clusters combined with sub-cluster COW are likely to
provide the best performance:

1. The refcounts are incremented in a single operation when the large
cluster is allocated.
2. COW still works at a smaller granularity, so allocating a large
cluster does not require zeroing data.
3. Writes simply need to update the COW bitmap; no refcount updates
are required (a rough sketch follows below).
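
A minimal sketch of what such an entry could look like, purely
illustrative (the struct layout, field names and helpers are made up,
not a proposal for the on-disk format):

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical extended L2 entry: the usual host cluster offset plus a
 * bitmap with one bit per subcluster.  Bit i set means subcluster i has
 * been written (COWed from the backing file); bit i clear means reads
 * still go to the backing file. */
typedef struct SketchL2Entry {
    uint64_t host_offset;        /* offset of the cluster in the image file */
    uint64_t subcluster_bitmap;  /* allocation state of up to 64 subclusters */
} SketchL2Entry;

/* Once the big cluster is allocated (a single refcount update), a write
 * to subcluster i only has to COW that subcluster and set its bit; no
 * further refcount updates are needed. */
static void mark_subcluster_written(SketchL2Entry *e, unsigned i)
{
    e->subcluster_bitmap |= UINT64_C(1) << i;
}

static bool subcluster_is_written(const SketchL2Entry *e, unsigned i)
{
    return e->subcluster_bitmap & (UINT64_C(1) << i);
}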

Stefan


