
histogram algorithms benchmark

From: Lars Winterfeld
Subject: histogram algorithms benchmark
Date: Mon, 30 Nov 2015 23:41:04 +0100


Two weeks ago I submitted code for the new histogram() and histcounts()
functions: https://savannah.gnu.org/patch/index.php?8801

Now I have found some time to benchmark the code I wrote and would like
to share the results. They can be reproduced with the script in the
attachment. I'd appreciate it if others could run the script as well (it
takes a few minutes) and send me the output hist_bench_*.dat files,
together with some info about their setup (CPU, Octave version, etc.).

There are now 4 possible algorithms (N data points, M bins):
(a) Use lookup() to binary-search the indices of the data points
(relative to the bin edges) and then run accumarray() to get the
histogram. It is used in histc() if M > 2 (i.e., almost always). I use
it in histcounts() if the bins are not uniform in width.  O(N*log(M) + M)
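In NumPy terms the idea looks roughly like this (just a sketch, not the actual Octave code; searchsorted stands in for lookup(), bincount for accumarray(), and the edge semantics may differ slightly):

```python
import numpy as np

def hist_binary_search(data, edges):
    # Binary-search each point's position among the edges: O(N*log(M)).
    idx = np.searchsorted(edges, data, side="left")
    # idx == 0 means below the first edge, idx == len(edges) means above
    # the last edge; both are out of range and get dropped.
    inside = (idx > 0) & (idx < len(edges))
    # Accumulate counts per bin: O(N + M).
    return np.bincount(idx[inside] - 1, minlength=len(edges) - 1)
```
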

(b) If the bins are uniform, just calculate the index based on floor ((x
- min) / binwidth). This one is new in histcounts().  O(N + M)
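A sketch of the same idea in NumPy (again only illustrative; how the Octave code treats points exactly on the right edge or outside the range may differ):

```python
import numpy as np

def hist_uniform(data, lo, hi, nbins):
    # One arithmetic pass: each point's bin index is floor((x - lo)/width),
    # no search needed because all bins have the same width.  O(N)
    width = (hi - lo) / nbins
    idx = np.floor((data - lo) / width).astype(int)
    idx[data == hi] = nbins - 1          # fold the right edge into the last bin
    inside = (idx >= 0) & (idx < nbins)  # drop out-of-range points
    return np.bincount(idx[inside], minlength=nbins)  # O(N + M)
```
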

(c) Append "cutoff" markers, sort the data, and get the histogram from
the diff() of the positions of the markers in the sorted sequence. This
is used in hist() if N >= 30.  O(N*log(N) + M)
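Sketched in NumPy (not the hist() code itself; here the edges serve as the markers, and with a stable sort a point equal to an edge lands in the bin below it — the real code's tie handling may differ):

```python
import numpy as np

def hist_sort(data, edges):
    # Append the edges as "cutoff" markers and sort everything together:
    # O((N+M)*log(N+M)).
    n = len(data)
    combined = np.concatenate([data, edges])
    order = np.argsort(combined, kind="stable")
    # Positions where the markers (original index >= n) ended up.
    marker_pos = np.flatnonzero(order >= n)
    # Points strictly between consecutive markers form one bin each.
    return np.diff(marker_pos) - 1
```
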

(d) Loop over the edges and count how many data points are <= each edge.
Use diff() to get the histogram. This is used in hist() if N < 30 and in
histc() if M <= 2.  O(N*M)
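The same loop in NumPy terms (a sketch only; the Octave code may treat the first edge or out-of-range points differently):

```python
import numpy as np

def hist_edge_loop(data, edges):
    # For every edge, count the points at or below it -- a full pass per
    # edge, hence O(N*M) -- giving cumulative counts ...
    cum = np.array([np.count_nonzero(data <= e) for e in edges])
    # ... and difference them to recover the per-bin totals.
    return np.diff(cum)
```
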

Results for my setup (i7, 8 GB RAM, Linux 3.19, 64-bit, Octave 4.1.0+):
Algorithm (b) is always as fast or faster than (a). However, (b) can
only be used if the bins are uniform. (a) depends only moderately on M.

Algorithm (d) is faster than (a) and (b) if M < 10, but can be orders of
magnitude slower if M gets big.

Algorithm (c) is faster than (a) and (b) if N < 600. In that region, the
absolute times are quite small, e.g. (c) takes 0.1 ms while (a) and (b)
take about 0.15 ms.

I think it is not worth maintaining a separate code path for 0.05
milliseconds of runtime, so I'd drop (c) altogether. Only (d) might be
worth including; however, it only makes a noticeable difference if the
user has about N > 10^7 data points and at the same time chooses fewer
than 10 bins (which is quite uncommon, I'd say).

So all in all, I'm quite okay with the algorithms I chose in histcounts().

I'm happy to discuss this if you have questions or comments. It would be
interesting to see benchmark results on other setups (possibly with JIT
enabled, etc.).

Best wishes,

Attachment: hist_bench.m
Description: Text Data
