gnustep-dev

Re: Compatibility breakage involved in upgrading to the MacOS-X 10.5 API


From: Richard Frith-Macdonald
Subject: Re: Compatibility breakage involved in upgrading to the MacOS-X 10.5 API
Date: Sun, 22 Feb 2009 13:44:16 +0000


On 22 Feb 2009, at 12:50, David Chisnall wrote:

On 22 Feb 2009, at 09:55, Richard Frith-Macdonald wrote:

Obviously that breaks binary compatibility on 64-bit systems, but perhaps less obviously it also breaks source code compatibility in quite a few places (wherever the API changes from passing a pointer to a 32-bit integer to passing a pointer to a 64-bit integer), and it will cause compiler warnings wherever we assign a 64-bit integer to a 32-bit one.

Can we defer this problem by defaulting to #defining NSInteger as intptr_t and NSUInteger as uintptr_t, but providing a GS_LEGACY_64 mode that compiles them as int and unsigned int and emits a warning? That way new code can use NSInteger and NSUInteger on 64-bit platforms in either mode, as long as all packages are compiled with the same options. Then, once all existing code has been modified to use the new types, we turn the flag off and break the ABI just once.

Yes we could ... my only reservation about doing that is that it's likely to end up with only one or two people actually using the new version ... so testing would be incomplete, and when we changed to the new mode we would discover a whole lot of breakage anyway, negating the point of doing it. It really depends on how many people are willing to commit to testing the change on 64-bit systems.

I suspect the bigger problem will be the CGFloat type, which is now used all over Cocoa. I really don't understand the reason for this change. It's float on 32-bit platforms and double on 64-bit platforms, which almost sounds sensible until you remember that 32-bit and 64-bit floats are both computed on the 80-bit x87 unit on 32-bit x86, and so are the same speed, but are computed with the SSE unit on x86-64, where calculations on doubles are often slower than the equivalent calculations on floats. If anything, the opposite definitions would make more sense for the architectures that Apple supports (especially since most GPUs still can't handle doubles sensibly, and a lot of the geometry calculations that use these types will probably end up being offloaded to the GPU in future versions).

Interesting, I didn't realise that CGFloat actually resulted in poorer performance on 64bit systems.
I guess we have to go ahead and change to match Apple anyway though :-(




