Re: [Qemu-devel] [PATCH 0/2] block/vdi: Fix bmap writing error


From: Max Reitz
Subject: Re: [Qemu-devel] [PATCH 0/2] block/vdi: Fix bmap writing error
Date: Thu, 23 Oct 2014 09:07:20 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.2.0

On 2014-10-22 at 13:56, Kevin Wolf wrote:
Am 21.10.2014 um 10:51 hat Max Reitz geschrieben:
The bmap size in block/vdi.c may exceed INT_MAX, so using
bdrv_pwrite_sync() (which takes an int byte count) is not a good idea.
The second patch of this series fixes this by replacing
bdrv_pwrite_sync() with bdrv_write() + bdrv_flush() (we don't need the
p in pwrite here).

The first patch employs ROUND_UP() and DIV_ROUND_UP() in block/vdi.c,
making it explicit that bmap_size is aligned to BDRV_SECTOR_SIZE, which
the second patch relies on.

See https://bugzilla.redhat.com/show_bug.cgi?id=1154940 for a bug
report.

I am not including an iotest in this series because it would require
qemu to allocate and then write about 2 GB of data; indeed, test 1 in
084 already fails for me because qemu cannot allocate 4 GB for the bmap.

In fact, I can only test this once I'm home, where I have more RAM
available (activating swap space to test this is a mistake I only made
once).
Thanks, applied to the block branch.

So I tested it yesterday and it turns out it does not fix the bug. I'm sorry, could you unapply patch 2?

The reason it doesn't work is, as you told me in person yesterday, that bdrv_write() goes through the same checks as the bdrv_pwrite() path. It does not literally share that path, but it does test whether nb_sectors > INT_MAX / BDRV_SECTOR_SIZE. Therefore, writing the bitmap still fails with EINVAL for images of 512 TB or more.

Now, we could either actually fix the VDI code, or simply limit the maximum image size to half of what it is now. I don't see a problem with the latter: qemu already does not support all image sizes, so there is no reason not to limit the size further. Also, 512 TB seems plenty today.

So unless someone is against this, I'll adjust VDI_BLOCKS_IN_IMAGE to be INT_MAX / sizeof(uint32_t) (it's currently UINT32_MAX / sizeof(uint32_t) == 0x3fffffff). This will be problematic if sizeof(int) > sizeof(uint32_t), but I doubt that'll happen soon, if at all.

Max


