
From: Pete Batard
Subject: Re: [Libcdio-devel] [RFC/PATCH] add multi extent ISO9660 file support to libcdio
Date: Wed, 6 Jun 2018 22:52:07 +0100
User-agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.8.0

On 2018.06.06 21:00, Thomas Schmitt wrote:
> I think that is unfortunate to break the old examples (and the API).

Yes. That's why this is an RFC: I mostly reused what I did in Rufus, where I didn't care about breaking the API, and I was hoping we could improve on that first approach.

> Maybe we regret it later, but how about staying API compatible by
> keeping the old struct members and adding new ones for possible more
> extents ?

That's one possibility, although, as you pointed out, it adds some extra complexity (not an insurmountable amount, though).

My fear, however, is that downstream libcdio users will stick with the old API and not give extents the second thought they should, unless we force their hand a little. So I have some preference for your second option, which flags the current lsn/size/secsize as being in the process of deprecation.

However, I'll be happy to go with the API-compatible option if that's what the majority votes for.

> Or: If we are willing to waste 12 bytes per struct, then we could put
> the first extent's info into the [0] elements of the new arrays.

This I like better. I think it would give us the best of both worlds, while avoiding the clunky "this weird illogical stuff *must* be kept for compatibility reasons" baggage that tends to plague libraries as they grow older. Anything we can deprecate the day it has overstayed its usefulness, regardless of how far away that day might be, always has my vote.
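To make that concrete, here is a rough sketch of what the "[0] element" layout could look like. All names, the array bound, and the helper below are illustrative assumptions for the sake of discussion, not the actual libcdio declarations:

```c
#include <stdint.h>

#define ISO_MAX_MULTIEXTENT 8           /* illustrative bound, not libcdio's actual value */

typedef struct {
  /* ... other iso9660_stat_t members elided ... */
  uint8_t  extents;                      /* number of extents in use, always >= 1 */
  uint32_t lsn[ISO_MAX_MULTIEXTENT];     /* lsn[0] carries what the old `lsn` member carried */
  uint32_t size[ISO_MAX_MULTIEXTENT];    /* bytes per extent */
  uint32_t secsize[ISO_MAX_MULTIEXTENT]; /* blocks per extent */
} stat_sketch_t;

/* total byte count of a file across all of its extents */
static uint64_t stat_total_size(const stat_sketch_t *p_stat) {
  uint64_t total = 0;
  for (unsigned i = 0; i < p_stat->extents; i++)
    total += p_stat->size[i];
  return total;
}
```

With that layout, single-extent code that only ever looks at element [0] keeps working, at the cost of the unused trailing array slots.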
> I wonder why there is
>
>    uint32_t secsize; /**< number of sectors allocated for first extent */
>
> It gets set in lib/iso9660/iso9660_fs.c by
>
>    p_stat->secsize = _cdio_len2blocks (p_stat->size, ISO_BLOCKSIZE);
>
> which computes the number of blocks of the extent from the number
> of bytes.
> We could define macros or functions
>    CDIO_EXTENT_BLOCKS(size) ... to give the number of blocks
>    CDIO_EXTENT_BBYTES(size) ... to give the rounded-up number of bytes
> and use them where currently
>    secsize[...] * ISO_BLOCKSIZE
> are used.

No objection to that.
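For what it's worth, a minimal sketch of what those two macros could look like, assuming the usual 2048-byte ISO_BLOCKSIZE and the same round-up behaviour as _cdio_len2blocks:

```c
#include <stdint.h>

#define ISO_BLOCKSIZE 2048

/* number of whole blocks needed to hold `size` bytes (rounded up) */
#define CDIO_EXTENT_BLOCKS(size) (((size) + ISO_BLOCKSIZE - 1) / ISO_BLOCKSIZE)

/* `size` rounded up to a multiple of the block size */
#define CDIO_EXTENT_BBYTES(size) (CDIO_EXTENT_BLOCKS(size) * (uint64_t)ISO_BLOCKSIZE)
```

The cast in CDIO_EXTENT_BBYTES is just a precaution against 32-bit overflow when the caller passes sizes close to 4 GB.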

> The new API exposes ISO_MAX_MULTIEXTENT to the user.
> This will break the ABI if we ever increase the number.
>
> But currently i have no good proposal which avoids the need for a
> destructor function of iso9660_stat_t.

Yeah. I would have liked something that isn't hardcoded, but I couldn't come up with a good non-disruptive solution.
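For reference, the dynamically-sized alternative would look roughly like the sketch below (all names purely hypothetical). It avoids baking a fixed bound into the ABI, but the heap-allocated extent list is exactly what forces a destructor into the API:

```c
#include <stdlib.h>
#include <stdint.h>

typedef struct {
  uint32_t lsn;      /* first block of this extent */
  uint32_t size;     /* bytes in this extent */
} extent_sketch_t;

typedef struct {
  /* ... other members elided ... */
  unsigned         n_extents;
  extent_sketch_t *extent;    /* heap-allocated: a plain free() on the stat no longer suffices */
} stat_dyn_sketch_t;

/* the destructor a dynamic layout would force on API users */
static void stat_dyn_free(stat_dyn_sketch_t *p_stat) {
  if (p_stat) {
    free(p_stat->extent);
    free(p_stat);
  }
}
```

Callers would then have to route every iso9660_stat_t release through the destructor, which is the disruption being weighed against the fixed-size arrays.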

> Shouldn't example/isolist.c show the multiple extents as one single
> file ?

This was actually an additional point I wanted to make in my initial e-mail but forgot: I haven't patched isolist yet, so it still displays multiextent files as separate entries, like this:

./isolist /c/Downloads/file_of_4gb.iso
Preparer : XORRISO-1.2.2 2012.04.02.133001, LIBISOBURN-1.2.2, LIBISOFS-1.2.2, LIBBURN-1.2.2
Volume     : ISOIMAGE
d [LSN     50]     2048 /.
d [LSN     50]     2048 /..
- [LSN 2098231]        4 /aaa
- [LSN     55] 4294965248 /file_of_4gb
- [LSN 2097206]  2099200 /file_of_4gb
- [LSN 2098232]        4 /zzz

> Maybe with a column which tells the number of extents, and one that
> tells whether the bytes of all extents form a single array without
Yeah. I had the idea of adding an optional note that would list the number of extents when a file has more than one, such as:

- [LSN 2098231]        4 /aaa
- [LSN     55] 4297064448 /file_of_4gb [2 extents]
- [LSN 2098232]        4 /zzz

But I'm open to other ideas. I do think that flagging the gaps would be a bit of overkill for isolist, though...

At this stage, while still waiting for any additional feedback, I think I'll take a stab at your option 2 and the CDIO_EXTENT_### macros sometime next week, and try to submit a new proposal in a week or two.

At any rate, thanks a lot for your input, which has proven very valuable yet again!


