From: Nicholas Jankowski
Subject: [Octave-bug-tracker] [bug #46830] Multiplication about 4x slower than Matlab
Date: Tue, 6 Sep 2022 21:25:20 -0400 (EDT)

Follow-up Comment #10, bug #46830 (project octave):

Looking at the same comparisons as comment #7, on a Windows 10 machine with a
modest Intel Core i5-6440HQ (4 cores):

Matlab R2022a:


a=rand(6000);
tic; a * a; toc
Elapsed time is 4.977654 seconds.

tic; a .* a; toc
Elapsed time is 0.112174 seconds.


Just watching Task Manager, a*a clearly used all 4 cores; I couldn't tell with
a.*a. I created a b = rand(20000) array that took much longer: b.*b took ~20
sec and clearly used all 4 cores, and the CPU was pegged at ~100% utilization
during both tasks.

Octave 7.2.0: 

a = rand(6000);
tic; a*a; toc
Elapsed time is 1373.01 seconds.
tic; a .* a; toc
Elapsed time is 0.127481 seconds.


I don't know what it's doing under the hood or how to get that detail with
vanilla Windows, but with a*a all four cores saw some definite CPU load
increase. Octave's CPU usage hovered around 30-40% total utilization even
while the system total was only 60-70%; it never spiked any CPU to 100%,
unlike Matlab, which drove all CPUs to 100%. Memory use would ratchet up
250-300 MB, similar to Matlab's memory use jumping from ~1 to 1.4 GB. No GPU
on this machine.
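
For anyone trying to reproduce this, a quick way to see what Octave has to
work with is to check the core count and the usual BLAS thread-count
variables. A minimal sketch, assuming an OpenBLAS build of Octave where
OMP_NUM_THREADS / OPENBLAS_NUM_THREADS control the threading (they have to be
set before Octave starts to take effect):

nproc()                           % logical cores visible to Octave
getenv("OMP_NUM_THREADS")         % empty string if unset
getenv("OPENBLAS_NUM_THREADS")    % empty string if unset

If both variables come back empty, the BLAS library is picking its thread
count on its own.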

So... I don't know what might have happened in the past 6 years to go from 2x
to ~300x slower, but that definitely leaves a lot of room for improvement. I
ran each test a few times to make sure something else wasn't driving a CPU
limit, but got similar results.
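
Something like the following loop is enough to repeat the timings a few times
in a row (a minimal sketch; the size n is whatever you want to test, and it
runs unchanged in both Octave and Matlab):

n = 6000;
a = rand(n);
for k = 1:3
  tic; a * a; toc     % matrix multiply (should hit the BLAS dgemm path)
  tic; a .* a; toc    % element-wise multiply
end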



    _______________________________________________________

Reply to this item at:

  <https://savannah.gnu.org/bugs/?46830>

_______________________________________________
Message sent via Savannah
https://savannah.gnu.org/



