From: Dan Sebald
Subject: [Octave-bug-tracker] [bug #54572] int64 does not saturate correctly in negative direction
Date: Fri, 31 Aug 2018 21:49:56 -0400 (EDT)
User-agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:61.0) Gecko/20100101 Firefox/61.0
Follow-up Comment #44, bug #54572 (project octave):
I looked into the performance of multiplication yesterday and a bit today. The
division-based algorithms I tried simply slowed things down, basically by a
factor of two, on my processor. Division has always required extra steps and
repetitive bit shifting; it simply costs too much computation. For the record,
I'm attaching the algorithms I tried as a diff, but it isn't useful for
anything.
The last alternative in that code (if one looks) is the use of
__builtin_mul_overflow(). The existing algorithm and the builtin optimization
are actually quite close. I took a second look at the current algorithm, and
it is not using a post-operation on signed arithmetic, so it really isn't
relying on any signed behavior, unlike what the fast add/sub algorithms were
doing. It's quite nice, really: it just does the arithmetic as (upper_x
* 2^32 + lower_x) * lower_y, with some observations about binary arithmetic
as regards overflow.
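To make the word-wise observation concrete, here is a minimal sketch (names
and structure are my own, not the actual Octave routine) of detecting unsigned
64-bit multiply overflow from the 32-bit halves; the compiler builtin
mentioned above can be used to cross-check it:

```cpp
#include <cassert>
#include <cstdint>

// Sketch: x * y = (ux*2^32 + lx) * (uy*2^32 + ly).  If ux and uy are both
// nonzero, the ux*uy*2^64 term guarantees overflow.  Otherwise at most one
// cross term survives, and it can be range-checked before shifting.
// On overflow, 'result' is left unspecified.
bool umul64_overflows (uint64_t x, uint64_t y, uint64_t& result)
{
  uint64_t ux = x >> 32, lx = x & 0xFFFFFFFFu;
  uint64_t uy = y >> 32, ly = y & 0xFFFFFFFFu;

  if (ux && uy)
    return true;               // ux*uy*2^64 term: certain overflow

  uint64_t cross = ux * ly + uy * lx;  // at most one addend is nonzero here
  if (cross >> 32)
    return true;               // cross*2^32 would exceed 64 bits

  uint64_t low = lx * ly;      // lx, ly < 2^32, so this product is exact
  result = (cross << 32) + low;
  return result < (cross << 32);  // carry out of the final addition
}
```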
I was a bit concerned that the routine does:
// Essentially, what we do is compute sign, multiply absolute values
// (as above) and impose the sign.
uint64_t usx = octave_int_abs (x);
uint64_t usy = octave_int_abs (y);
and that the absolute value might turn MIN_VAL into MAX_VAL (i.e., that -N
becomes N-1 rather than N in two's complement), but I couldn't find any
aberrant result around that value.
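For reference, taking the absolute value into an unsigned 64-bit variable is
what avoids that pitfall; a small sketch (my own helper, not octave_int_abs
itself) of how INT64_MIN can map to exactly 2^63 without wrapping or invoking
signed-overflow undefined behavior:

```cpp
#include <cassert>
#include <cstdint>
#include <limits>

// Negate in the unsigned domain: well-defined for every input, including
// INT64_MIN, whose magnitude 2^63 fits in uint64_t (but not in int64_t).
uint64_t uabs64 (int64_t v)
{
  uint64_t u = static_cast<uint64_t> (v);
  return v < 0 ? 0u - u : u;
}
```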
Out of curiosity, I wondered how efficient the use of LONG_DOUBLE is, so I
activated it with the following mod:
/* # undef OCTAVE_ENSURE_LONG_DOUBLE_OPERATIONS_ARE_NOT_TRUNCATED */
# define OCTAVE_INT_USE_LONG_DOUBLE
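For intuition about what this mode does, here is a minimal sketch of the
floating-point strategy (my own illustration, not Octave's actual code),
scaled down to int16 where plain double is provably exact: a product of two
int16 values needs at most 31 bits, well inside double's 53-bit significand.
The long-double variant plays the same game for the wider types, which is why
it needs the x87 64-bit significand to not be silently truncated:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Multiply in floating point (exact for this width), then saturate by
// clamping to the int16 range before converting back.
int16_t sat_mul_i16 (int16_t x, int16_t y)
{
  double p = static_cast<double> (x) * static_cast<double> (y);  // exact
  p = std::min (p, 32767.0);    // saturate at INT16_MAX
  p = std::max (p, -32768.0);   // saturate at INT16_MIN
  return static_cast<int16_t> (p);
}
```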
Here are timing comparisons between the two: the current word-wise
decomposition of multiplication versus LONG DOUBLE (regardless of whether the
criterion is met for the computations to be correct on my system):
50 TRIALS

64-BIT SIGNED MULT
         CURRENT_MULT   D_FLOAT_MULT
  tgood  16.784         16.632
  tover  13.592         13.296

32-BIT SIGNED MULT
         CURRENT_MULT   D_FLOAT_MULT
  tgood  7.4400         7.4120
  tover  6.3520         6.4160

16-BIT SIGNED MULT
         CURRENT_MULT   D_FLOAT_MULT
  tgood  4.3400         4.4000
  tover  3.7720         5.2840

8-BIT SIGNED MULT
         CURRENT_MULT   D_FLOAT_MULT
  tgood  2.6280         1.9560
  tover  2.5840         3.7680
OBSERVATIONS

I tried to make the test as apples-to-apples as possible. Multiplication in
both cases becomes slower for larger numbers. The current algorithm can skip
one of the integer mults and accumulations when the numbers are small. The
floating-point implementation probably behaves the same for an internal
firmware reason, in that it can terminate the accumulation when it runs out
of bits. For that reason, I chose a fairly big number that doesn't overflow:
w = int64(ones(5000));
x = w; x(:) = int64(sqrt(double(intmax('int64'))) * 0.9);
y = w;
start = cputime; for i=[1:50]; z = x .* y; endfor; tgood = cputime - start
z(1:2)
x = w; x(:) = intmax('int64');
y = x;
start = cputime; for i=[1:50]; z = x .* y; endfor; tover = cputime - start
z(1:2)
tover / tgood
clear w x y z
w = int32(ones(5000));
x = w; x(:) = int32(sqrt(double(intmax('int32'))) * 0.9);
y = w;
start = cputime; for i=[1:50]; z = x .* y; endfor; tgood = cputime - start
z(1:2)
x = w; x(:) = intmax('int32');
y = x;
start = cputime; for i=[1:50]; z = x .* y; endfor; tover = cputime - start
z(1:2)
tover / tgood
clear w x y z
w = int16(ones(5000));
x = w; x(:) = int16(sqrt(double(intmax('int16'))) * 0.9);
y = w;
start = cputime; for i=[1:50]; z = x .* y; endfor; tgood = cputime - start
z(1:2)
x = w; x(:) = intmax('int16');
y = x;
start = cputime; for i=[1:50]; z = x .* y; endfor; tover = cputime - start
z(1:2)
tover / tgood
clear w x y z
w = int8(ones(5000));
x = w; x(:) = int8(sqrt(double(intmax('int8'))) * 0.9);
y = w;
start = cputime; for i=[1:50]; z = x .* y; endfor; tgood = cputime - start
z(1:2)
x = w; x(:) = intmax('int8');
y = x;
start = cputime; for i=[1:50]; z = x .* y; endfor; tover = cputime - start
z(1:2)
tover / tgood
clear w x y z
Judging from the numbers, for int32 and int64 the two approaches are almost a
dead heat. However, for the smaller, non-native CPU widths, floating point
shows an obvious speed-up in the non-overflow case. Could it be that the
optimizing compiler is able to make use of CPU features, such as issuing two
multiplies at once for the shorter data widths?
On the other hand, in the overflow case the floating point becomes much worse
at the smaller data widths. This is probably because the current approach can
test early for certain overflow, whereas maybe the firmware floating-point
path cannot. Say it IS doing two multiplies at once; in that case it has to
carry both multiplies all the way through, rather than doing a quick test
beforehand for none, one, or both multiplies overflowing. In some sense, the
overflow condition isn't super important; if all of the user's mults are
overflowing, there's a bigger problem.
Summarizing, the advantage of OCTAVE_INT_USE_LONG_DOUBLE appears to be
limited to 8-bit multiplies on my system; otherwise it's an even comparison.
I've created a patch that removes the OCTAVE_INT_USE_LONG_DOUBLE scenario, in
case you feel the benefit of double-float doesn't outweigh the complexity and
the CPU requirements on ALU result bit width. The current integer-based
routine looks nearly as fast as can be done. Plus, a bit of tweaking might
allow the 8-bit version of the integer routine to be optimized for the ALU.
(There is always the SSE approach, too, for those really in need of
efficiency.)
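As a sketch of the direction that SSE remark points at (my own scalar
illustration, not actual SIMD code): widen the int8 operands to int16,
multiply exactly (|product| <= 128*128 = 16384, which fits), and clamp back
to the int8 range. SIMD code would do the same sixteen lanes at a time, with
the SSE instruction _mm_packs_epi16 performing the saturating narrowing step
in one shot:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>

// Saturating element-wise int8 multiply via a widen/multiply/clamp loop.
void sat_mul_i8 (const int8_t* x, const int8_t* y, int8_t* z, size_t n)
{
  for (size_t i = 0; i < n; i++)
    {
      int16_t p = static_cast<int16_t> (x[i]) * static_cast<int16_t> (y[i]);
      p = std::min<int16_t> (p, 127);    // saturate at INT8_MAX
      p = std::max<int16_t> (p, -128);   // saturate at INT8_MIN
      z[i] = static_cast<int8_t> (p);
    }
}
```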
(file #44919, file #44920)
_______________________________________________________
Additional Item Attachment:
File name: simpler_int_mult-djs2018aug31.diff Size:2 KB
File name: octave-remove_int_use_long_double-djs2018aug31.patch Size:22 KB
_______________________________________________________
Reply to this item at:
<http://savannah.gnu.org/bugs/?54572>
_______________________________________________
Message sent via Savannah
https://savannah.gnu.org/