From: Anthony Liguori
Subject: Re: [Qemu-devel] [RFC] qed: Add QEMU Enhanced Disk format
Date: Wed, 08 Sep 2010 13:35:25 -0500
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.11) Gecko/20100713 Lightning/1.0b1 Thunderbird/3.0.6
On 09/08/2010 01:24 PM, Blue Swirl wrote:
Based on these:

#define TABLE_NOFFSETS (table_size * cluster_size / sizeof(uint64_t))
header.image_size <= TABLE_NOFFSETS * TABLE_NOFFSETS * header.cluster_size

the maximum image size equals table_size^2 * cluster_size^3 / sizeof(uint64_t)^2. Is the squaring and cubing of the terms beneficial? I mean, the size scales up fast to unusable numbers, whereas with a more linear equation (for example, allowing different L1 and L2 sizes), more values would actually be usable. Again, I'm not sure if this matters at all. I think the minimum size should be table_size = 1, cluster_size = 4 bytes: 1^2 * 4^3 / 8^2 = 1 byte, or is the minimum bigger? What's the minimum for cluster_size?
4k. The smallest maximum image size is 1 GB (table_size = 1 with 4k clusters). There is no upper limit on image size because clusters can be arbitrarily large.
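To make the scaling concrete, here is a quick sketch of the arithmetic using the formula quoted above. This is not code from the QED tree, and the helper name and parameter pairs are mine, chosen only as examples:

#include <stdio.h>
#include <stdint.h>

/* Maximum image size implied by the header fields:
 *   TABLE_NOFFSETS   = table_size * cluster_size / sizeof(uint64_t)
 *   max image size   = TABLE_NOFFSETS^2 * cluster_size
 */
static uint64_t qed_max_image_size(uint64_t table_size, uint64_t cluster_size)
{
    uint64_t noffsets = table_size * cluster_size / sizeof(uint64_t);
    return noffsets * noffsets * cluster_size;
}

int main(void)
{
    /* table_size = 1, 4k clusters: 512^2 * 4096 = 1 GB, the floor above */
    printf("%llu\n", (unsigned long long)qed_max_image_size(1, 4096));

    /* table_size = 4, 64k clusters: 32768^2 * 65536 = 64 TB */
    printf("%llu\n", (unsigned long long)qed_max_image_size(4, 65536));

    return 0;
}

The cubic term in cluster_size is what makes the range jump from 1 GB to 64 TB with such small parameter changes.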
It shouldn't matter since any header that is >= 16 TB means something mutated, escaped the lab, and is terrorizing the world as a qed monster image. In the Wiki version this has changed to header_size in clusters. With 2GB clusters, there will be some wasted bits.
2GB clusters would waste an awful lot of space regardless. I don't think it's useful to have clusters that large.
By the way, perhaps a cluster_size of 0 should mean 4GB? Or maybe all sizes should be expressed as an exponent of 2; then 16 bits would allow cluster sizes up to 2^64?
I don't think cluster sizes much greater than 64k actually make sense. We don't need an image format that supports > 1PB disks.
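For what it's worth, the exponent encoding suggested above would look something like the sketch below. This is purely hypothetical, since the header as discussed here stores cluster_size as a byte count:

#include <stdint.h>

/* Hypothetical: store log2(cluster_size) instead of the byte count.
 * A field value of n means 2^n bytes, so even a single byte covers
 * every power-of-two size a 64-bit offset could address. */
static uint64_t cluster_size_from_exponent(uint8_t cluster_bits)
{
    /* only meaningful for cluster_bits < 64 */
    return UINT64_C(1) << cluster_bits;
}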
Regards,

Anthony Liguori