[lmi] Is bourn_cast demonstrably correct?

From: Greg Chicares
Subject: [lmi] Is bourn_cast demonstrably correct?
Date: Mon, 20 Mar 2017 02:22:22 +0000
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Icedove/45.6.0

bourn_cast is virtually identical to Kevlin Henney's original [the
quotation below shows his implementation of is_numeric_castable(); if
it returns true, then a static_cast is performed]:


  return !(arg < 0 && !result_traits::is_signed) &&   // loss of negative range
         !(arg_traits::is_signed && arg < result_traits::min()) && // underflow
         !(arg > result_traits::max());                            // overflow

which I think is truly beautiful. I changed the order of the tests in
the first parenthesized expression, because there's no reason not to
make it explicit that short-circuit evaluation is desired. And I
replaced std::numeric_limits<T>::min() with lowest(), because that
fixes a problem with floating-point types. Those superficial changes
aside, I see no way to make it better for integral types. I tried hard
to improve it here:


and last week I tried designing a new implementation independently,
but I couldn't come up with anything shorter or faster, and couldn't
find any flaw in it for integral types.

So I was concerned when I read this:

| properly determining whether a source value is in range for a given
| destination type is tricky because [it] involves comparing a value of
| a type with the boundary values of a different type, [a] task that
| itself requires a numeric conversion. This is in fact so difficult
| that boost::numeric_cast<> had it wrong for certain cases for years,
| even after a number of revisions.

which claims that the proposed alternative has:

| fixed all the range-checking problems of the initial version

and I've been trying to find a testcase that would fail with Henney's
original implementation but pass with boost's "improved" version.

Unfortunately, boost's unit test
is extremely short; AFAICT, the only tests it has that lmi's unit test
lacks are:
  BOOST_CHECK( 0.0f == numeric_cast<float>( 0.0 ) );
  BOOST_CHECK( 0.0 == numeric_cast<double>( 0.0 ) );
which I omitted because I don't think it's worthwhile to make
bourn_cast work correctly with floating point.

I read every post referencing "numeric_cast" on the boost mailing list
since its inception, and didn't find any example showing that Henney's
code was wrong for integers. This post
says that this throws:
  double d = std::numeric_limits<double>::infinity();
where returning std::numeric_limits<float>::infinity() would have
been better. But that doesn't involve "comparing a value of a type
with the boundary values of a different type".

Finding no published counterexample for integral types, we fall back
on first principles and conduct a rigorous case analysis, beginning
with two preliminary principles:

- We ignore floating-point types because they're outside the design
goals of bourn_cast and may well be forbidden in future; that means
we don't have to consider what happens if a NaN is compared to zero,
e.g., or whether (float) infinity equals (double) infinity.

- The anomalies we seek can only arise from comparison between signed
and unsigned quantities. Otherwise, the usual arithmetic conversions
[C++11, 5/9] convert the lower- to the higher-ranked type, which is a
strictly widening conversion, so comparisons behave intuitively. Call
this principle "X" for easy reference below, where we'll use it to
dismiss conditions that cannot possibly give an unintuitive result.

Now consider Henney's code, which performs only three comparisons:

  !(arg < 0 && !result_traits::is_signed) &&   // loss of negative range
  !(arg_traits::is_signed && arg < result_traits::min()) && // underflow
  !(arg > result_traits::max());                            // overflow

On second thought, let's use the comparisons in bourn_cast instead,
because it seems clearer here to discuss conditions that throw than
negated conditions that don't:
(1) if(! to_traits::is_signed && from < 0) throw ...
(2) if(from_traits::is_signed && from < to_traits::lowest()) throw ...
(3) if(to_traits::max() < from) throw ...

Taking each condition in order:

(1) Preconditions: type To is unsigned (explicitly stated); therefore,
we may assume type From is signed, by principle "X".

Both 'from' and '0' are signed, so "from < 0" converts either the LHS
or the RHS to a wider signed type. No anomaly can occur.

Postcondition: 'from' is nonnegative--otherwise an exception must have
been thrown. (Or 'from' is negative and type To is signed, so we have
a signed-to-signed conversion that we can ignore by principle "X".
IOW if a range-comparison anomaly exists, then 'from' is nonnegative.)

(2) Preconditions: type From is signed (explicitly stated); therefore,
we may assume type To is unsigned, by principle "X". Furthermore, by
the postcondition of (1) above, 'from' is nonnegative.

Now to_traits::lowest() must be an unsigned zero, because zero is the
lowest representable value of all unsigned integral types. This zero
is to be compared to the signed, nonnegative 'from' value. The usual
arithmetic conversions apply, and there are three cases [5/9]:
 (a) unsigned rank greater: convert signed value to unsigned type
 (b) signed type can represent the entire unsigned range: convert
     unsigned value to the signed type
 (c) otherwise: convert both to unsigned type corresponding to the
     signed type
Case (b) can never entail any loss of range, so it's always okay. In
cases (a) and (c), 'from' is converted to an unsigned type that must
be wide enough to preserve its (nonnegative) value. In any case, no
anomaly can occur.

(3) Preconditions: by postcondition (1) above, 'from' is nonnegative.

Clearly to_traits::max() is positive, even for type bool. Now the
discussion of (2) above establishes also that comparing a nonnegative
'from' with this positive maximum will return the intuitive result.

I speculate that this argument can be faithfully summarized thus: for
an integer comparison to behave counterintuitively, its operands must
differ in signedness and one must hold a negative value, and there is
no path through this code where both conditions hold.

If I've reasoned this through correctly, then Cacciola's conclusion
can apply only to floating point, and Henney's code is flawless for
integral types.
