
Re: Nonlinear fitting with lg(a+x)

From: Jaroslav Hajek
Subject: Re: Nonlinear fitting with lg(a+x)
Date: Mon, 16 Mar 2009 13:28:26 +0100

On Mon, Mar 16, 2009 at 11:21 AM, reposepuppy <address@hidden> wrote:

> Thanks for the suggestions, but I guess I can't wait for the 3.2 version and
> I can't even understand the code you've written (because I'm really a newbie
> in Octave...), while the function produced by the fitting is
> flexible (polynomial ones are allowed). So I made a polynomial one instead.
> Still, I want to find out what the code represents, and I can't find what
> "@par" and the lines below mean:
> par0 = [... initial guesses for r0, a, b]
> par = fsolve (model_error, par0, ...options)
> Could you make a brief explanation for me?

It's simple: fsolve can't work with the parameters as individual
variables; it needs them packed into a single vector, because it uses
linear algebra on that vector internally.

The statement
model_error = @(par) r - model (par(1), par(2), par(3), c);

defines a new anonymous function with a single argument, par, which is
expected to be the vector [r0, a, b]. It computes the residual vector
by calling the model function with the parameters par(1), par(2),
par(3) and the independent variable c, and subtracting the model
values from the dependent observations r.
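To make this concrete, here is a small self-contained sketch. The model form below (r0 + a*log10(b + c)) and the data are made up for illustration only; substitute your own model function and observations.

```octave
% Hypothetical model for illustration; your actual model may differ.
model = @(r0, a, b, c) r0 + a .* log10 (b + c);

c = (1:5)';                 % independent variable
r = model (2, 3, 4, c);     % dependent observations (here: exact, no noise)

% Pack the three parameters into one vector argument, as fsolve requires:
model_error = @(par) r - model (par(1), par(2), par(3), c);

% At the true parameters the residual vector is (numerically) zero:
disp (model_error ([2; 3; 4]))
```

Note that model_error captures r and c from the workspace at the time it is defined, so only the parameter vector remains as an argument.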

fsolve then attempts to find a vector par_opt such that
norm (model_error (par_opt)) is as small as possible, which is what you
want for a least-squares fit.

par0 = [... initial guesses for r0, a, b]

means that you need to supply initial guesses for the parameters. If
you have absolutely no idea, just use zeros, but judging by your post
this was not the case. The better the initial guess, the better the
results, obviously.
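Putting the pieces together, a minimal end-to-end sketch might look like this (again, the model form and the "observations" are hypothetical; adapt them to your data):

```octave
% Hypothetical model and synthetic data for illustration.
model = @(r0, a, b, c) r0 + a .* log10 (b + c);

c = (1:10)';
r = model (1.5, 2.0, 3.0, c);   % synthetic observations from known parameters

model_error = @(par) r - model (par(1), par(2), par(3), c);

par0 = [1; 1; 2];               % rough initial guesses for r0, a, b
par = fsolve (model_error, par0);

% par should be close to the true values [1.5; 2.0; 3.0],
% and the residual norm should be near zero.
```

Octave's fsolve handles this overdetermined system (10 residuals, 3 unknowns) in the least-squares sense, which is what makes it usable as a fitting routine here.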

Another option for nonlinear least squares is leasqr in the optimization package.

You can also use a general nonlinear minimization function, as
suggested by M. Creel. However, that usually means not exploiting the
special structure of the minimization problem (sum of squares).
The general rule is that Newton-like solvers (fsolve and leasqr)
usually do better when the expected residual is small (a good fit),
while general minimization solvers (like bfgsmin) usually do an
equally good or better job when the residual is large (a bad fit).


RNDr. Jaroslav Hajek
computing expert & GNU Octave developer
Aeronautical Research and Test Institute (VZLU)
Prague, Czech Republic
