Re: [Qemu-devel] DMG chunk size independence

From: Kevin Wolf
Subject: Re: [Qemu-devel] DMG chunk size independence
Date: Tue, 18 Apr 2017 12:29:21 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On 15.04.2017 at 10:38, Ashijeet Acharya wrote:
> Hi,
> Some of you are already aware, but for the benefit of the open list,
> this mail is about the task described here:
> http://wiki.qemu-project.org/ToDo/Block/DmgChunkSizeIndependence
> I had a chat with Fam about this, and he suggested a solution where
> we fix the output buffer size to a maximum of, say, 64K and keep
> inflating until we reach the end of the input stream. We extract the
> required data when we enter the desired range and discard the rest.
> Fam, however, termed this only a "quick fix".

You can cache the current position for a very easy optimisation of
sequential reads.
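The fixed-window quick fix, combined with the position cache suggested above, could look roughly like the following Python sketch. All names here (ChunkReader, WINDOW, and so on) are invented for illustration; the real driver would do this in C against zlib's inflate.

```python
import zlib

# Fixed output window, per the 64K buffer size mentioned in the mail.
WINDOW = 64 * 1024

class ChunkReader:
    """Inflate a compressed chunk through a fixed-size window, caching the
    stream position so a sequential read resumes where the last one ended."""

    def __init__(self, compressed):
        self.compressed = compressed
        self._reset()

    def _reset(self):
        self.decomp = zlib.decompressobj()
        self.out_pos = 0   # decompressed offset reached so far
        self.in_pos = 0    # compressed bytes consumed so far

    def read(self, offset, length):
        if offset < self.out_pos:
            self._reset()  # backwards seek: no index, so restart the stream
        out = b""
        end = offset + length
        while self.out_pos < end:
            want = min(WINDOW, end - self.out_pos)
            chunk = self.decomp.decompress(self.compressed[self.in_pos:], want)
            self.in_pos = len(self.compressed) - len(self.decomp.unconsumed_tail)
            if not chunk:
                break      # end of the compressed stream
            # Keep only the part of the window that overlaps the request.
            lo = max(offset - self.out_pos, 0)
            out += chunk[lo:]
            self.out_pos += len(chunk)
        return out
```

A strictly sequential reader never triggers the reset, so each request continues inflating exactly where the previous one stopped; only a backwards seek pays the cost of restarting from the beginning of the chunk.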

> The ideal fix would obviously be if we could somehow predict the exact
> location inside the compressed stream relative to the desired offset
> in the decompressed output stream, such as a specific sector in a
> chunk. Unfortunately, this is not possible without doing a first pass
> over the decompressed stream, as answered on the zlib FAQ page:
> http://zlib.net/zlib_faq.html#faq28
> AFAICT, after reading the zran.c example in zlib, the above-mentioned
> ideal fix would ultimately lead us to decompress the whole chunk in
> steps at least once to maintain an access point lookup table. This
> solution is better if we get several random accesses across
> different read requests; otherwise it ends up being equal to the fix
> suggested by Fam, plus some extra effort needed to build and
> maintain the access points.
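The access-point table described in the quote could be sketched as below (Python for brevity; AccessPointIndex and SPACING are made-up names). Snapshotting the whole decompressor at each point is the simple-but-heavy stand-in for what zran.c really does, which is saving the 32 KiB sliding window and resuming via inflateSetDictionary.

```python
import bisect
import zlib

SPACING = 64 * 1024  # distance between access points, in decompressed bytes

class AccessPointIndex:
    """One pass over a compressed chunk: every SPACING output bytes, snapshot
    the decompressor state so later random reads can resume from the nearest
    point instead of inflating from the start of the chunk."""

    def __init__(self, compressed):
        self.compressed = compressed
        self.points = []   # (out_off, in_off, decompressor snapshot)
        d = zlib.decompressobj()
        in_off = 0
        out_off = 0
        self.points.append((0, 0, d.copy()))
        while True:
            chunk = d.decompress(compressed[in_off:], SPACING)
            in_off = len(compressed) - len(d.unconsumed_tail)
            if not chunk:
                break
            out_off += len(chunk)
            self.points.append((out_off, in_off, d.copy()))
        self.size = out_off

    def read(self, offset, length):
        # Find the nearest access point at or before `offset`.
        idx = bisect.bisect_right([p[0] for p in self.points], offset) - 1
        out_off, in_off, snap = self.points[idx]
        d = snap.copy()    # copy again so the stored snapshot stays reusable
        out = b""
        end = min(offset + length, self.size)
        while out_off < end:
            chunk = d.decompress(self.compressed[in_off:],
                                 min(SPACING, end - out_off))
            in_off = len(self.compressed) - len(d.unconsumed_tail)
            if not chunk:
                break
            lo = max(offset - out_off, 0)
            out += chunk[lo:]
            out_off += len(chunk)
        return out
```

As the quote says, this only pays off when several random accesses hit the same chunk; the first pass alone costs the same as the quick fix, and each stored point carries real memory overhead (zran.c bounds this by keeping only the 32 KiB window per point).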

I'm not sure if it's worth the additional effort.

If we take a step back, what the dmg driver is used for in practice is
converting images into a different format, which is strictly sequential
I/O. The important thing is that images with large chunk sizes can be
read at all; performance with sequential I/O is secondary, and
performance with random I/O is almost irrelevant, as far as I'm
concerned.