Re: MATLAB / Scilab / Octave benchmarks

From: roland65
Subject: Re: MATLAB / Scilab / Octave benchmarks
Date: Mon, 25 Jul 2016 01:01:20 -0700 (PDT)

Thanks for your comments. Here are my answers:

1. Table 4.8 shows the best run times in green and the worst in red.

2. You're right, the cost functions can be vectorized further. However, the
code you provided takes a vector argument x, while the PSO and DE codes work
on a matrix argument. Here are the new vectorized versions:

function y = rosenbrock (x)
  % x is an n-by-m matrix; each column is a candidate solution
  n = size (x, 1);
  x2 = x.^2;
  xx = circshift (x, n-1);            % row i now holds x(i+1,:), wrapping around
  z = 100*(xx - x2).^2 + (1 - x).^2;
  y = sum (z, 1) - z(end,:);          % drop the spurious wrap-around term
endfunction

function y = rastrigin (x)
  % x is an n-by-m matrix; each column is a candidate solution
  n = size (x, 1);
  z = x.^2 - 10*cos (2*pi*x);
  y = 10*n + sum (z, 1);
endfunction

Using these versions, the PSO run time is halved for Octave (no change for
DE); the MATLAB and Scilab run times are unchanged.
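For anyone who wants to sanity-check the column-per-candidate vectorization
outside Octave, here is a rough NumPy translation (the n-by-m layout and the
wrap-and-subtract trick are the same as in the Octave versions above; this is
just an illustrative sketch, not part of the benchmark code):

```python
import numpy as np

def rosenbrock(x):
    # x: n-by-m array; each column is a candidate solution
    x2 = x**2
    xx = np.roll(x, -1, axis=0)        # row i now holds x[i+1], wrapping around
    z = 100.0*(xx - x2)**2 + (1.0 - x)**2
    return z.sum(axis=0) - z[-1, :]    # drop the spurious wrap-around term

def rastrigin(x):
    # x: n-by-m array; each column is a candidate solution
    n = x.shape[0]
    z = x**2 - 10.0*np.cos(2.0*np.pi*x)
    return 10.0*n + z.sum(axis=0)

# Both functions vanish at their known global minima:
x = np.column_stack([np.ones(5), np.zeros(5)])  # two candidates, dimension 5
print(rosenbrock(x)[0])   # 0.0 at x = (1, ..., 1)
print(rastrigin(x)[1])    # 0.0 at x = (0, ..., 0)
```

Evaluating one whole population per call like this is what removes the inner
loop over particles and lets the interpreter amortize its dispatch overhead.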

3. As said in the text: "[The pincon benchmark set] consists of 31 different
programs that target various operations: loops, random generations,
recursive calls, quick sorts, extractions and insertions, matrix operations,
permutations, comparisons, cell reads and writes. This benchmark set does
not make an intensive use of the BLAS routines because its aim is mainly to
test the speed of the interpreter and of the general matrix operations."

So the fftx program of the pincon set is indeed a recursive implementation
of the FFT, and it is of course much slower than the FFTW test (done in the
ncrunch benchmark set).
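To illustrate the gap being measured, here is a minimal radix-2 Cooley-Tukey
recursion in Python (a generic sketch of the same idea, not the actual fftx
code): each level slices, allocates temporaries, and makes two recursive
calls, which is exactly the interpreter and call overhead the pincon set
targets, whereas FFTW runs tuned compiled kernels.

```python
import numpy as np

def fft_recursive(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return x
    # Recurse on even- and odd-indexed samples, then combine with twiddles.
    even = fft_recursive(x[0::2])
    odd = fft_recursive(x[1::2])
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + twiddle * odd,
                           even - twiddle * odd])

x = np.random.default_rng(0).standard_normal(256).astype(complex)
print(np.allclose(fft_recursive(x), np.fft.fft(x)))  # True
```

The result matches the library FFT, but the recursion stresses the
interpreter rather than the BLAS/FFTW layer, so the two tests measure
different things.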

Perhaps I should add more explanation about this in the text...
