We would like to use more unsigned 32-bit types in our code (e.g., for hardware modeling), but we've noticed that GCL 2.6.1 compiles unsigned 32-bit arithmetic much less efficiently than signed 32-bit arithmetic.
Consider the following:
(defun +<s32> (x y)
  (declare (type (signed-byte 32) x)
           (type (signed-byte 32) y))
  (the (signed-byte 32) (+ (the (signed-byte 32) x)
                           (the (signed-byte 32) y))))
(defun +<u32> (x y)
  (declare (type (unsigned-byte 32) x)
           (type (unsigned-byte 32) y))
  (the (unsigned-byte 32) (+ (the (unsigned-byte 32) x)
                             (the (unsigned-byte 32) y))))
Note that the only difference between +<s32> and +<u32> is that +<s32> declares its arguments and result to be 32-bit signed integers and +<u32> declares them to be 32-bit unsigned integers. When we compile to C, this is what we get (in part):
The thing to notice here is that L1 (the compiled function for +<s32>) represents v1 and v2 as longs and adds them with C's native + operator, while L2 (the compiled function for +<u32>) calls make_integer() and number_plus() instead. We're fairly sure this means that GCL falls back to bignum arithmetic for 32-bit unsigned integers rather than using C's unsigned long type and performing the addition with C's +. Is this the case? If so, is there a reason the GCL compiler behaves this way? We should also note that for unsigned *31*-bit types, make_integer() is *not* called on the addends, and C's native + *is* used.
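Our working theory (an assumption on our part, not something confirmed by the GCL sources) is that this is a fixnum-range issue: on a 32-bit platform a fixnum lives in a signed C long, so every (unsigned-byte 31) value fits, but the upper half of (unsigned-byte 32) does not and has to be boxed. A minimal C sketch of the range arithmetic:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* A 32-bit signed long (GCL's fixnum carrier on a 32-bit
           platform) can hold values up to INT32_MAX = 2147483647. */
        uint32_t u31_max = (1u << 31) - 1;  /* largest (unsigned-byte 31) */
        uint32_t u32_max = UINT32_MAX;      /* largest (unsigned-byte 32) */

        /* Every (unsigned-byte 31) value fits in the signed range... */
        printf("u31_max fits: %d\n", u31_max <= (uint32_t)INT32_MAX);
        /* ...but (unsigned-byte 32) values above 2^31 - 1 do not, which
           would force boxing (hence make_integer/number_plus). */
        printf("u32_max fits: %d\n", u32_max <= (uint32_t)INT32_MAX);
        return 0;
    }

This prints "u31_max fits: 1" and "u32_max fits: 0", which at least matches the 31-bit behavior we observed.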
This problem is a big deal to us because we're doing millions of these sorts of calculations in the course of a given hardware simulation run.
P.S. It would also be nice if GCL knew about 64-bit native types. "long long" wasn't part of the original ANSI C (C89) standard, but it was added in C99, and it's well supported in GCC as well as in other C compilers.
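For what it's worth, a native 64-bit add in GCC is a single machine operation even for values that overflow 32 bits; a sketch using uint64_t (which maps to unsigned long long on 32-bit targets):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint64_t x = 4000000000ULL;  /* larger than any 32-bit value */
        uint64_t y = 4000000000ULL;
        uint64_t sum = x + y;        /* one native add, no bignums */
        printf("%llu\n", (unsigned long long)sum);  /* prints 8000000000 */
        return 0;
    }

This is the kind of code we'd hope a 64-bit-aware GCL could emit for (unsigned-byte 64) arithmetic.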