Re: [Help-gsl] gsl performance
From: onefire
Subject: Re: [Help-gsl] gsl performance
Date: Mon, 7 Oct 2013 13:22:23 -0400
If there is a chance of a patch getting accepted, I will certainly submit
one once I have it.
I took a look at the code in the multimin directory last night (the
allocation problem affects other parts of the library too, but my example
was about the multidimensional minimizer, so I figured I would start there).
I might be wrong, but the way I see it, gsl has two design characteristics
that by themselves do not cause much harm but, if combined, make it very
hard to use stack arrays internally.
One, which I like, is that the routines give the user low-level control:
you can create a minimizer object, iterate it manually, and observe its
progress. I prefer this over a single function that does all the work until
convergence (but see below).
The other, which seems to be common throughout the library, is that it
never lets the user handle the allocations. For example, I cannot write
this:
gsl_multimin_fminimizer s;
gsl_multimin_fminimizer_type T;
gsl_multimin_fminimizer_type_init(T);
gsl_multimin_fminimizer_init(s, T, n);
because the library does not provide such init functions.
The above has the advantage that it allows the caller to keep objects on
the stack. If one wants the internal arrays on the stack, this is necessary
because:
1) Since the work is done across multiple function calls, the internal
arrays need to live as long as the minimizer object does, so they need to
be embedded in the object rather than allocated separately.
2) If the internal arrays are embedded in the object, they end up on the
stack only when the object itself is on the stack, so the caller really
needs the option to allocate the object on the stack.
The way I see it, there are two options to get around this, neither of
which breaks the API; both merely extend it:
1) Provide functions that do all the work until convergence (or a
user-specified maximum number of iterations) without creating an object.
This gives the user less control, but they could always use the standard
methods to control the iterations. This is what I do in my library, and by
itself it provides a (small) performance improvement because you can skip
the creation of objects. But most importantly for the present discussion,
it allows the internal arrays to be automatic variables inside such a
function.
2) Provide additional init and finalize functions (or whatever we want to
call them) to let the user handle the allocations herself, if she wants to.
I am not sure about which option I prefer.
Gilberto
On Sun, Oct 6, 2013 at 9:40 PM, Rhys Ulerich <address@hidden> wrote:
> > Rhys, I did try to use views. They do not help because the gsl routines
> > allocate vectors internally and there is not much that I can do about
> it...
> > except for maybe hacking gsl and changing gsl_vector_alloc myself.
>
> If from hacking around you happen to restructure the API so that a
> clean workspace can be allocated for a given problem size and then
> passed in to avoid the problematic internal allocations, please pass
> along a patch. There's a lot of precedent in the library for having
> low-level compute kernels wrapped by convenience methods, and
> refactoring a kernel to match that pattern would be most welcome
> provided the existing APIs remain unbroken.
>
> - Rhys
>