
Re: Checking for loss of information on integer conversion

From: Paul Eggert
Subject: Re: Checking for loss of information on integer conversion
Date: Sun, 18 Feb 2018 12:04:20 -0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.6.0

Eli Zaretskii wrote:

Emacs Lisp is not used to write software that controls
aircraft and spaceships

Actually, I maintain Emacs Lisp code that controls timestamps used in aircraft and spaceships. I'm not saying that Emacs itself runs the aircraft and spaceships, but it definitely is used to develop software and data used there. As it happens, I'm currently engaged in an email thread about time transfer between Earth and Mars (yes, this is really a thing, and people are trying to do it with millisecond precision) that is related to a project where I regularly use Emacs Lisp. See the thread containing this message:


More generally, why signaling an error by default in this case is a
good idea? ...  That would
be similar to behavior of equivalent constructs in C programs

Sure, and C compilers typically issue diagnostics for situations similar to what's in Bug#30408. For example, for this C program:

int a = 18446744073709553664;

GCC issues a diagnostic, whereas for the similar Emacs Lisp program:

(setq b 18446744073709553664)

Emacs silently substitutes a number that is off by 2048. It's the latter behavior that causes the sort of problem seen in Bug#30408.

When people write a floating-point number they naturally expect it to have some fuzz. But when they write an integer they expect it to be represented exactly, and not to be rounded. Emacs already reports an overflow error for the following code that attempts to use the same mathematical value:

(setq c #x10000000000000800)

so it's not like it would be a huge change to do something similar for decimal integers.
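The underlying check is straightforward: a decimal integer loses information exactly when it does not survive a round trip through double. A minimal C sketch of such a test (the helper name is hypothetical; the actual patch works inside the Emacs Lisp reader, not on this function):

```c
#include <stdbool.h>
#include <stdint.h>

/* Return true if v converts to double and back without change,
   i.e. the conversion loses no information.  (Illustrative sketch
   only, under the assumption of IEEE binary64 doubles.) */
static bool uint64_fits_double_exactly(uint64_t v)
{
    double d = (double) v;
    /* d may round up to 2^64, which is out of uint64_t range, so
       bound-check in double space before converting back. */
    if (d >= 18446744073709551616.0)  /* 2^64, exactly representable */
        return false;
    return (uint64_t) d == v;
}
```

With this test, 2^53 passes while 2^53 + 1 fails, matching the 53-bit significand of an IEEE double.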

When Emacs was originally developed, its integers were typically 28 bits (not counting sign) and floating-point numbers could typically represent integers exactly up to 53 bits (not counting sign), so the old Emacs behavior was somewhat defensible: although it didn't do bignums, at least it could represent integers nearly twice as wide as fixnums. However, nowadays Emacs integers typically have more precision than floating point numbers, and the old Emacs behavior is more likely to lead to counterintuitive results such as those described in Bug#30408.

On reflection, in light of your comments, I suppose it was confusing that the proposal used a new signal 'inexact' when it should simply signal overflow. After all, that's what string_to_number already does for out-of-range hexadecimal integers. That issue is easily fixed; revised patch attached.

Attachment: 0001-Avoid-losing-info-when-converting-integers.patch
Description: Text Data
