[Qemu-devel] [PATCH v1 0/8] Refactor DMG driver to have chunk size independence
Wed, 26 Apr 2017 01:29:03 +0530
This series provides chunk size independence for the DMG driver, to prevent
denial-of-service in cases where untrusted files are accessed by the user.
This task is mentioned on the public block ToDo
Here -> http://wiki.qemu.org/ToDo/Block/DmgChunkSizeIndependence
Patch 1 introduces a new data structure to aid caching of random access points
within a compressed stream.
Patch 2 is an extension of patch 1 and introduces a new function to
initialize/update/reset our cached random access point.
Patch 3 limits the output buffer size to a maximum of 2MB to avoid QEMU
allocating huge amounts of memory.
Patch 4 is a simple preparatory patch to aid handling of various types of
chunks.
Patches 5 & 6 help to handle various types of chunks.
Patch 7 simply refactors dmg_co_preadv() to read multiple sectors at once.
Patch 8 finally removes the error messages QEMU used to emit when an image
with chunk sizes above 64MB was accessed by the user.
Convert a DMG file to raw format using the "qemu-img convert" tool from an
unpatched QEMU build. Next, convert the same image again after applying these
patches. Compare the two output images using the "qemu-img compare" tool to
check that they are identical. You can pick up any DMG image from the
collection present
Here -> https://lists.gnu.org/archive/html/qemu-devel/2014-12/msg03606.html
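The verification steps above can be sketched as the following command
sequence. The binary names and the image filename are placeholders for your
own unpatched/patched builds and test image.

```shell
# Convert the same DMG image to raw, once with an unpatched qemu-img
# and once with a build carrying this series.
./qemu-img-old convert -f dmg -O raw test.dmg before.raw
./qemu-img-new convert -f dmg -O raw test.dmg after.raw

# Compare the two raw images; "Images are identical." means the
# refactor did not change the decompressed data.
./qemu-img-new compare -f raw -F raw before.raw after.raw
```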
These patches assume that the terms "chunk" and "block" are synonyms of each
other when we talk about bz2 compressed streams. According to the bz2 docs,
the maximum uncompressed size of a chunk/block can reach 46MB, which is less
than the previously allowed size of 64MB, so we can continue decompressing the
whole chunk/block at once instead of decompressing it partially, just as we do
now.
This limitation is forced by the fact that bz2 compressed streams do not allow
random access midway through a chunk/block: the BZ2_bzDecompress() API in
libbz2 seeks for the magic key "BZh" before starting decompression. This magic
key is present only at the start of every chunk/block, and since our cached
random access points need not necessarily point to the start of a chunk/block,
decompression fails with the error value BZ_DATA_ERROR_MAGIC.
Special thanks to Peter Wu for helping me understand and tackle the bz2
related issues.
Ashijeet Acharya (8):
dmg: Introduce a new struct to cache random access points
dmg: New function to help us cache random access point
dmg: Limit the output buffer size to a max of 2MB
dmg: Refactor and prepare dmg_read_chunk() to cache random access
dmg: Handle zlib compressed chunks
dmg: Handle bz2 compressed/raw/zeroed chunks
dmg: Refactor dmg_co_preadv() to start reading multiple sectors
dmg: Remove the error messages to allow wild images
block/dmg.c | 214 +++++++++++++++++++++++++++++++++++++++---------------------
block/dmg.h | 10 +++
2 files changed, 148 insertions(+), 76 deletions(-)