## Re: Removing some workarounds for big integers

**From**: Philipp Stephani

**Subject**: Re: Removing some workarounds for big integers

**Date**: Sat, 1 Aug 2020 22:09:32 +0200

On Mon, Apr 22, 2019 at 8:45 PM Paul Eggert <eggert@cs.ucla.edu> wrote:
> On 4/22/19 9:59 AM, Philipp Stephani wrote:
> >
> >> +#define INTEGER_TO_INT(num, type)                                      \
> >> +  (TYPE_SIGNED (type)                                                  \
> >> +   ? ranged_integer_to_int ((num), TYPE_MINIMUM (type), TYPE_MAXIMUM (type)) \
> >> +   : ranged_integer_to_uint ((num), TYPE_MINIMUM (type)))
> >>                                      ^^^^^^^^^^^^
> >> This should be TYPE_MAXIMUM.
> >
> > Thanks, fixed
> >
> More important, INTEGER_TO_INT's type conversion messes up and can cause
> a signal on picky platforms.
How so?
> Although lisp.h really does need a better API for integer
> conversion (and I'll volunteer to review/help if someone wants to work
> on that!), INTEGER_TO_INT is not a step in the right direction; we need
> something more general/accurate/whatever.
I don't really understand. What could be more general than converting
to an arbitrary integer type? And how could the conversion be more
accurate than converting each representable value exactly and signaling
an error otherwise?
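For illustration, here is a minimal sketch of the range-checked conversion being discussed, not the actual Emacs code: the value is checked against the target type's bounds before narrowing, and out-of-range values are rejected instead of being silently truncated. The names `ranged_fits` and `checked_to_int` are hypothetical, and `intmax_t` stands in for a Lisp integer already known to fit `intmax_t`; real bignum-aware code would also have to reject values outside that range.

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helper: does NUM lie within [MIN, MAX]?  */
static bool
ranged_fits (intmax_t num, intmax_t min, intmax_t max)
{
  return min <= num && num <= max;
}

/* Narrow to 'int' only after the range check has succeeded.  Real
   callers would signal a Lisp range error instead of asserting.  */
static int
checked_to_int (intmax_t num)
{
  assert (ranged_fits (num, INT_MIN, INT_MAX));
  return (int) num;
}
```

The key design point matches Philipp's argument: every value representable in the target type converts exactly, and everything else is an error, so no implicit out-of-range conversion ever happens.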
