help-octave

Re: RBF Toolbox?


From: Mike B.
Subject: Re: RBF Toolbox?
Date: Thu, 22 Apr 2010 22:14:48 -0700 (PDT)

Thanks for the info Jaroslav.

I may have a look at the GPR package, though the DACE toolbox works fine
under Octave. I am mainly interested in an RBF toolbox now.

Cheers,
Mike.


--- On Fri, 23/4/10, Jaroslav Hajek <address@hidden> wrote:

> From: Jaroslav Hajek <address@hidden>
> Subject: Re: RBF Toolbox?
> To: address@hidden
> Cc: "Jordi Gutiérrez Hermoso" <address@hidden>, "Octave mai. lis." 
> <address@hidden>
> Date: Friday, 23 April, 2010, 3:05 PM
> 2010/4/23 Mike B. <address@hidden>:
> > Hi Jordi,
> >
> > Thanks for the prompt reply.
> > I am interested in interpolation from scattered data;
> the dimension (number of components per vector) can be as
> high as 200. Ideally, the toolbox would allow selecting
> from several basis types (multiquadric, linear, Gaussian,
> inverse multiquadric, etc.) and would calibrate any
> hyper-parameters (such as the free shape coefficient in MQ
> and IMQ), say, by cross-validation. The DACE toolbox
> calibrates the Kriging hyper-parameters by maximum likelihood.
> >
> > Cheers,
> > Mike.
> >
> >
> 
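[Editor's note: the kind of interpolator described above, selectable radial basis functions with a shape parameter calibrated by cross-validation, can be sketched in a few lines. This is a hypothetical minimal illustration in Python/NumPy, not code from any existing Octave toolbox; the function names are invented for this sketch.]

```python
import numpy as np

def rbf_interpolate(X, y, Xq, c=1.0, kind="multiquadric"):
    """Fit RBF weights on centers X, evaluate at query points Xq.
    kind: 'multiquadric', 'inverse_multiquadric', 'gaussian', 'linear'.
    c is the free shape parameter mentioned for MQ and IMQ."""
    def phi(r):
        if kind == "multiquadric":
            return np.sqrt(r**2 + c**2)
        if kind == "inverse_multiquadric":
            return 1.0 / np.sqrt(r**2 + c**2)
        if kind == "gaussian":
            return np.exp(-(r / c)**2)
        return r  # 'linear'
    # pairwise distances between the scattered centers
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    w = np.linalg.solve(phi(D), y)  # interpolation weights
    Dq = np.linalg.norm(Xq[:, None, :] - X[None, :, :], axis=-1)
    return phi(Dq) @ w

def loo_cv_error(X, y, c, kind="multiquadric"):
    """Leave-one-out CV error: one simple way to calibrate c."""
    n = len(y)
    errs = []
    for i in range(n):
        mask = np.arange(n) != i
        pred = rbf_interpolate(X[mask], y[mask], X[i:i+1], c, kind)
        errs.append((pred[0] - y[i])**2)
    return np.mean(errs)
```

Minimizing `loo_cv_error` over a grid of `c` values would stand in for the automatic calibration Mike asks for; the O(n^4) cost of the naive LOO loop is the obvious thing a real toolbox would optimize away.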
> You may want to try the OctGPR package for Gaussian Process
> Regression, which is essentially just another name for Kriging.
> It supports several covariance models (Gaussian, exponential, IMQ,
> Matern-3 and Matern-5) and can calibrate hyper-parameters by ML.
> The hyper-parameters consist of a scale factor for each dimension
> and a single noise level. Having a separate scale factor for each
> dimension typically gives the best results, but may be too costly
> if there are many dimensions (I have never used it with more
> than 10).
> There is currently no built-in way to employ fewer hyper-parameters
> in the training, except that you can use the function in
> no-training mode (it just calculates the log likelihood) and then
> employ it in any custom training procedure you like. But the
> built-in one is faster (it takes care of reusing data in memory
> and calculates the ML derivatives very efficiently).
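[Editor's note: the model described above, a kernel with one scale factor per input dimension plus a single noise level, and the log likelihood that ML training maximizes, can be written out compactly. The sketch below is an assumed formulation in Python/NumPy using a squared-exponential kernel; it is not OctGPR's actual code.]

```python
import numpy as np

def gpr_fit_predict(X, y, Xq, scales, noise):
    """GP regression with a squared-exponential kernel: one
    length-scale per input dimension (scales) and a single noise
    level, mirroring the hyper-parameter set described above."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) / scales)**2
        return np.exp(-0.5 * d2.sum(axis=-1))
    K = k(X, X) + noise**2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = k(Xq, X) @ alpha
    # log marginal likelihood: the quantity maximized in ML training,
    # and what a "no-training mode" would hand to a custom optimizer
    lml = (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
           - 0.5 * len(y) * np.log(2 * np.pi))
    return mean, lml
```

A custom training procedure of the kind mentioned would simply call this with trial values of `scales` and `noise` and maximize `lml`.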
> 
> A stupid & simple demo is here (some of the info may be outdated),
> and it is also part of the script:
> http://artax.karlin.mff.cuni.cz/~hajej2am/octgpr.php
> 
> Besides the full GPR, which is usable up to several thousands of
> scattered points (it requires full matrix factorizations), there
> is also a projected approach, where the data are projected onto
> fewer centers to reduce the rank of the covariance matrix, cutting
> the cost from O(N^3) to O(N*M^2), where M is the number of
> centers. I should warn you, however, that this part hasn't yet
> received much testing.
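[Editor's note: the projected idea can be sketched as below. This assumes a subset-of-regressors style formulation, a common way to get the O(N*M^2) cost; OctGPR's actual projection may differ. Names are invented for this illustration.]

```python
import numpy as np

def projected_gpr(X, y, Z, Xq, scale, noise):
    """Low-rank 'projected' GP sketch: the N training points are
    represented through M centers Z, so only M x M systems are
    solved and the dominant cost is O(N*M^2) instead of O(N^3)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) / scale)**2
        return np.exp(-0.5 * d2.sum(axis=-1))
    Knm = k(X, Z)                      # N x M cross-covariance
    Kmm = k(Z, Z)                      # M x M center covariance
    A = Knm.T @ Knm + noise**2 * Kmm   # M x M system, O(N*M^2) to form
    w = np.linalg.solve(A, Knm.T @ y)
    return k(Xq, Z) @ w
```

With `Z` chosen as a subset of the training points (or cluster centers), M controls the accuracy/cost trade-off; taking `Z = X` recovers the full-rank predictive mean.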
> 
> hth
> 
> 
> -- 
> RNDr. Jaroslav Hajek, PhD
> computing expert & GNU Octave developer
> Aeronautical Research and Test Institute (VZLU)
> Prague, Czech Republic
> url: www.highegg.matfyz.cz
> 




