From: Paul Eggert
Subject: Re: Making --with-wide-int the default
Date: Fri, 16 Oct 2015 08:29:11 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.3.0
David Kastrup wrote:
I don't think it's sensible to configure a non-native default, turning everything into multiple-register operations and obliterating compact data structures that match the architecture's choices.
That's what I was worried about too, before I implemented --with-wide-int. But it turned out not to be a problem. Performance is slightly worse, but the difference is small enough that I have to measure it to see it. This can be surprising, until you try it.
For example, on 32-bit x86, the hot path (cdr of a cons) for the Fcdr function is 9 instructions with a narrow int, and 11 instructions --with-wide-int. Most of those 9 instructions are call overhead and bit-twiddling for runtime tests, and these are the same either way. --with-wide-int causes Fcdr to need one extra instruction to load the extra word of the argument, and one extra instruction to load the extra word of the result, and that's it.
If you're interested in squeezing out more performance on a --with-wide-int configuration, you can try the x32 ABI. E.g., see <https://wiki.debian.org/X32Port> for Debian or Ubuntu. I haven't bothered, though, as x86 is good enough and works everywhere.
Of course I'd rather have something with GMP and bignums. But that's considerably more work than --with-wide-int.
a GMP number should be converted back to a native LISP integer whenever it's small enough again.
Obviously. And one can even work around the = vs eq problem for larger integers. But these are things that are still on the drawing board. --with-wide-int works now.