From: Marc Baaden
Subject: [Help-gsl] questions about gsl_multimin_f*_minimizers (efficiency, drawbacks)
Date: Sat, 08 Nov 2003 11:40:10 +0100
Hi,
I have some questions with respect to the efficiency of the routines
in gsl_multimin.
I have replaced a routine in an existing Fortran code, which
originally used a Quasi-Newton minimizer (Harwell VA13A), with a call
to the gsl_multimin routines, choosing among conjugate_pr,
conjugate_fr, steepest_descent, vector_bfgs and nm_simplex.
The original Harwell VA13A algorithm should be quite similar to vector_bfgs.
So I was rather surprised to see quite a noticeable "performance"
difference, the Harwell code being roughly a factor of 1.5-2 faster.
(NB: I should say here that I use the minimizer in an interactive
procedure, where you can follow the actual minimization on the
screen, so I am not giving quantitative timings, only qualitative
observations.)
There is no big difference among the other fdf minimizers;
nm_simplex is not suited to the problem.
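For reference, my replacement is essentially the standard GSL driver
loop; this is a simplified sketch, where my_func stands in for the
gsl_multimin_function_fdf struct that wraps my actual Fortran
evaluation code:

    #include <gsl/gsl_multimin.h>

    extern gsl_multimin_function_fdf my_func;  /* wraps my Fortran code */

    void minimize(gsl_vector *x, size_t n)
    {
      gsl_multimin_fdfminimizer *s =
        gsl_multimin_fdfminimizer_alloc(gsl_multimin_fdfminimizer_vector_bfgs, n);
      size_t iter = 0;
      int status;

      gsl_multimin_fdfminimizer_set(s, &my_func, x, 0.01, 1e-4);  /* step, tol */

      do {
        status = gsl_multimin_fdfminimizer_iterate(s);
        if (status) break;                            /* iteration failed */
        status = gsl_multimin_test_gradient(s->gradient, 1e-3);
        /* ... update the interactive display from s->x here ... */
      } while (status == GSL_CONTINUE && ++iter < 1000);

      gsl_multimin_fdfminimizer_free(s);
    }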
I can see several possible reasons for this "slow" performance, and
I wonder whether there is a workaround, a trick to apply, or some
minimizer parameters to tweak.
- In the Harwell code, one can give a step size for each of the
variables to be minimized, whereas GSL takes a single overall step
size. That might make a difference, e.g. for BFGS. Can per-variable
step sizes be implemented? Does anybody have experience with this?
A possible rescaling workaround is sketched below.
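The workaround I can imagine (my own sketch, not an existing GSL
facility) is to rescale the variables so that a single step size
suits them all: minimize F over y, where x_i = scale_i * y_i, and
apply the chain rule to the gradient. The struct and function names
below are hypothetical:

    #include <gsl/gsl_multimin.h>

    /* Minimize F(x) over y with x_i = scale_i * y_i, so that one
       global step size is appropriate for all components.
       Chain rule: dF/dy_i = scale_i * dF/dx_i. */
    typedef struct {
      gsl_multimin_function_fdf *inner;  /* the original f/df/fdf */
      const double *scale;               /* per-variable scale factors */
      gsl_vector *x;                     /* scratch: unscaled point */
    } scaled_params;

    static void unscale(const gsl_vector *y, scaled_params *p)
    {
      size_t i;
      for (i = 0; i < p->x->size; i++)
        gsl_vector_set(p->x, i, p->scale[i] * gsl_vector_get(y, i));
    }

    static double scaled_f(const gsl_vector *y, void *params)
    {
      scaled_params *p = params;
      unscale(y, p);
      return p->inner->f(p->x, p->inner->params);
    }

    static void scaled_df(const gsl_vector *y, void *params, gsl_vector *g)
    {
      scaled_params *p = params;
      size_t i;
      unscale(y, p);
      p->inner->df(p->x, p->inner->params, g);
      for (i = 0; i < g->size; i++)      /* chain rule */
        gsl_vector_set(g, i, p->scale[i] * gsl_vector_get(g, i));
    }

(scaled_fdf would be analogous, and the minimum found in y is mapped
back with x_i = scale_i * y_i.)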
- Concerning the function that I minimize: it is very difficult /
inefficient to separate the function evaluation from the derivative
calculation, so at every call to f, df or fdf both are evaluated.
The function evaluation (which is needed for the gradient calculation
anyway) is the bottleneck of the program (gprof: 80 % !!).
So if the number of calls to this routine could be reduced, this
would probably give a significant speedup.
My impression is that, depending on the GSL method, the function is
evaluated 2-3 times more often than with the original Harwell code.
My guess is that Harwell needs fewer function evaluations because it
calculates the Hessian once at the beginning and then updates it only
rarely, whereas the GSL routines maybe recalculate it more often.
Can this be modified (e.g. by specifying a frequency for the
calculation of the second derivatives)? A caching wrapper like the
one sketched below is the kind of workaround I have in mind.
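Since every evaluation produces both the value and the gradient
anyway, and since the minimizers may still call f and df separately
(e.g. during the line search), one workaround (my own sketch, not
something GSL provides) is to cache the last evaluation and serve
repeated f/df requests at the same point from the cache. eval_both()
is a placeholder for my combined evaluation routine:

    #include <gsl/gsl_vector.h>

    /* my combined value+gradient evaluation (hypothetical name) */
    void eval_both(const gsl_vector *x, void *params,
                   double *f, gsl_vector *g);

    typedef struct {
      gsl_vector *x_last;  /* point of the cached evaluation */
      double f_last;       /* cached function value */
      gsl_vector *g_last;  /* cached gradient */
      int valid;           /* 0 until the first evaluation */
      void *params;        /* parameters of the real function */
    } cache;

    static int same_point(const gsl_vector *a, const gsl_vector *b)
    {
      size_t i;
      for (i = 0; i < a->size; i++)
        if (gsl_vector_get(a, i) != gsl_vector_get(b, i))
          return 0;
      return 1;
    }

    /* re-evaluate only if the requested point differs from the cache */
    static void refresh(const gsl_vector *x, cache *c)
    {
      if (!c->valid || !same_point(x, c->x_last)) {
        gsl_vector_memcpy(c->x_last, x);
        eval_both(x, c->params, &c->f_last, c->g_last);
        c->valid = 1;
      }
    }

    static double cached_f(const gsl_vector *x, void *params)
    {
      cache *c = params;
      refresh(x, c);
      return c->f_last;
    }

    static void cached_df(const gsl_vector *x, void *params, gsl_vector *g)
    {
      cache *c = params;
      refresh(x, c);
      gsl_vector_memcpy(g, c->g_last);
    }

    static void cached_fdf(const gsl_vector *x, void *params,
                           double *f, gsl_vector *g)
    {
      cache *c = params;
      refresh(x, c);
      *f = c->f_last;
      gsl_vector_memcpy(g, c->g_last);
    }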
- Another (general) problem in my case is that the parameters of the
parametric function vary over time (as I said, the minimization is
interactive and the user can influence the function by changing its
parameters). This seems to be an intrinsic problem when using an
advanced fdf minimizer (other than steepest_descent), presumably
because these methods accumulate information from previous iterations
that becomes invalid once the function changes.
Are there other minimization methods (or workarounds) better adapted
to this case?
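The best workaround I have come up with so far (assuming I am using
the API as intended) is to throw away the minimizer's accumulated
state whenever the user changes a parameter, using
gsl_multimin_fdfminimizer_restart(). user_changed_parameters() is a
placeholder for however the application detects such a change:

    /* Inside the interactive iteration loop: if the user changed a
       function parameter, discard the minimizer's accumulated state
       (search direction, curvature information) and restart from
       the current point. */
    if (user_changed_parameters())
      gsl_multimin_fdfminimizer_restart(s);

    status = gsl_multimin_fdfminimizer_iterate(s);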
Thanks in advance,
Marc Baaden
--
Dr. Marc Baaden - Institut de Biologie Physico-Chimique, Paris
mailto:address@hidden - http://www.marc-baaden.de
FAX: +49 697912 39550 - Tel: +33 15841 5176 or +33 609 843217