Re: [Help-gsl] Non-linear Multi-parameter Least-Squares Example
From: Gordan Bobic
Subject: Re: [Help-gsl] Non-linear Multi-parameter Least-Squares Example
Date: Wed, 18 Jul 2007 11:11:22 +0100 (BST)
Thanks for that, really appreciate your help.
The problem I seem to have now is that the main fitting iteration loop
seems to just bail out on me immediately:
do
  {
    iter++;
    status = gsl_multifit_fdfsolver_iterate (s);
    printf ("status = %s\n", gsl_strerror (status));
    print_state (iter, s);
    if (status)
      break;
    status = gsl_multifit_test_delta (s->dx, s->x, 1e-4, 1e-4);
  }
while (status == GSL_CONTINUE && iter < 500);
"if (status)" always evaluates to true (status == -2, I checked) but the
error string is actually "the iteration has not converged yet". If I
change this to:
if (status != -2), then it just keeps going until the maximum number of
iterations are up. So, status == GSL_CONTINUE, but the worrying thing is
that at each pass the numbers output by print_state() are the same. It is
as if there is no gradient descent happening.
Am I missing something obvious? It doesn't seem to make any difference
what I put in x_init[], be it the output of the guesstimator, 1s 0s or
some other random number. It never seems to descend.
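For now I have replaced the magic -2 with the symbolic constant, roughly
like this (assuming GSL_CONTINUE really is the only benign value iterate
can hand back):

if (status != GSL_SUCCESS && status != GSL_CONTINUE)
  break;   /* bail out only on a hard error */

Same behaviour, of course: the solver still doesn't move.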
I removed all the gsl_rng* stuff because I have a fixed data block I use
for testing, so that I can make meaningful comparisons between different
algorithms and libraries (i.e. the contents of y[] are pre-generated).
That couldn't be the cause of the problem, could it?
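For reference, the test block is built once, roughly along these lines
(the parameter values here are placeholders rather than my real ones, and
y[] and sigma[] are plain double arrays):

/* synthetic data for Yi = a * sin(b / t + c) + d, noiseless */
for (i = 0; i < n; i++)
  {
    double t = i + 1.0;   /* start at t = 1 to keep b / t finite */
    y[i] = 1.0 * sin (5.0 / t + 0.1) + 0.5;
    sigma[i] = 0.1;
  }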
My Jacobian setting is:
for (i = 0; i < n; i++)
  {
    // Yi = a * sin(b / t + c) + d, with t = i
    gsl_matrix_set (J, i, 0, sinf (b / i + c) / sigma[i]);
    gsl_matrix_set (J, i, 1, a * cosf (b / i + c) / sigma[i] / i);
    gsl_matrix_set (J, i, 2, a * cosf (b / i + c) / sigma[i]);
    gsl_matrix_set (J, i, 3, 1 / sigma[i]);
  }
But even if this is wrong, surely it should still descend _somewhere_,
given random starting points, should it not?
Gordan
On Wed, 18 Jul 2007, John Gehman wrote:
> G'day Gordan,
>
> The Jacobian matrix J is the differential of your objective equation
> with respect to each parameter (corresponding to the second index)
> at each data point (corresponding to the first index).
>
> The relevant equation is [a * sin(b / t + c) + d - yi ] / sigma[i],
> i.e. you're trying to minimize the difference between the model and
> the data (yi), where some points may be more reliable than others
> (given sigma[i]), by optimizing the amplitude, period, phase, and
> offset of your sine wave. The Jacobian provides the analytic
> equations to inform the system of the sensitivity of the residuals in
> the evolving fit to changes in each of the parameters.
>
> Taking the partial derivative of your equation with respect to each
> of the floating parameters, and setting them appropriately into the
> matrix J, assuming your vector of floating parameters elsewhere is
> ordered (a,b,c,d), and retaining your s = sigma[i], I get:
>
> /* derivatives with respect to a, b, c, and d, in that order */
> gsl_matrix_set (J, i, 0, sin (b / t + c) / s);
> gsl_matrix_set (J, i, 1, cos (b / t + c) * a / (t * s));
> gsl_matrix_set (J, i, 2, cos (b / t + c) * a / s);
> gsl_matrix_set (J, i, 3, 1 / s);
>
> The analytic derivatives are usually my problem, however, so please
> confirm them!
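>
> For completeness, the _f callback that pairs with this Jacobian would
> look something like the sketch below (the struct and field names are
> my guesses rather than anything from your code, so adjust to taste):
>
> int
> sine_f (const gsl_vector *x, void *data, gsl_vector *f)
> {
>   size_t n = ((struct data *) data)->n;
>   double *y = ((struct data *) data)->y;
>   double *sigma = ((struct data *) data)->sigma;
>
>   double a = gsl_vector_get (x, 0);
>   double b = gsl_vector_get (x, 1);
>   double c = gsl_vector_get (x, 2);
>   double d = gsl_vector_get (x, 3);
>
>   size_t i;
>   for (i = 0; i < n; i++)
>     {
>       double t = (double) i;   /* beware t = 0: b / t blows up */
>       double Yi = a * sin (b / t + c) + d;
>       gsl_vector_set (f, i, (Yi - y[i]) / sigma[i]);
>     }
>
>   return GSL_SUCCESS;
> }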