
[Help-gsl] Just getting started - I have some 'newbie' questions.


From: Ted Byers
Date: Thu, 22 Mar 2012 00:07:52 -0400

The following program works like a charm - better than I expected.  I will
add questions within the code.

 

#include <iostream>
#include <vector>
#include <algorithm>
#include <iterator>

#include <boost/random/mersenne_twister.hpp>
#include <boost/random/normal_distribution.hpp>

#include <gsl/gsl_linalg.h>

// Nothing spectacular here - just doing what I usually do when setting up a
// stochastic simulation
int main(int argc, char* argv[]) {
  boost::random::mt19937 gen;
  boost::normal_distribution<> ndf1(0.0, 1.0);
  boost::normal_distribution<> ndf2(0.0, 1.0);
  boost::normal_distribution<> nde1(0.0, 0.250);
  boost::normal_distribution<> nde2(0.0, 0.250);
  boost::normal_distribution<> nde3(0.0, 0.250);

  std::vector<double> x, y, z;
  double e1(0), e2(0), e3(0), f1, f2;
  for (int i = 0; i < 100; ++i) {
    e1 = nde1(gen) - (0.1 * e1);
    e2 = nde2(gen) - (0.1 * e2);
    e3 = nde3(gen) - (0.1 * e3);
    f1 = ndf1(gen);
    f2 = ndf2(gen);
    x.push_back(f1 + f2 + e1);
    y.push_back(f1 + (0.5 * f2) + e2);
    z.push_back((0.5 * f1) + f2 + e3);
    //    std::cout << d << "\t\t" << x << std::endl;
  }

  for (unsigned int j = 0; j < x.size(); j++) {
    std::cout << j << "\t" << x[j] << "\t" << y[j] << "\t" << z[j]
              << std::endl;
  }

  // I figured the following out from the examples, and got better results
  // than I expected.
  unsigned int M = x.size();
  unsigned int N = 2;
  gsl_matrix * U = gsl_matrix_alloc(M, N);
  gsl_matrix * V = gsl_matrix_alloc(N, N);
  gsl_vector * Sv = gsl_vector_alloc(N);
  gsl_vector * XX = gsl_vector_alloc(M);
  gsl_vector * B = gsl_vector_alloc(N);

  // I find the following distastefully wasteful.  I would have preferred to
  // have the GSL functions work directly on std::vectors instead of having
  // to allocate new memory and bear the cost of the loop and copying all
  // the data.  Indeed, I'd have hoped for expression templates, to make
  // these functions all the more efficient.  I realize that it is likely
  // too late to refactor GSL to make good use of the STL (which I find as
  // priceless as the boost collection of libraries).  Is there a way to
  // make them all play nicely together without facing inefficiencies in
  // memory management and copying?
  for (unsigned int j = 0; j < y.size(); j++) {
    gsl_matrix_set(U, j, 0, y[j]);
    gsl_matrix_set(U, j, 1, z[j]);
    gsl_vector_set(XX, j, x[j]);
  }

  gsl_linalg_SV_decomp_jacobi(U, V, Sv);

  // It is my understanding that I can turn this into a ridge regression by
  // modifying the elements of Sv.  Is that correct?  If so, what function
  // of lambda and the elements of Sv ought to be applied?
  std::cout << std::endl << std::endl << "Sv = " << gsl_vector_get(Sv, 0)
            << "\t" << gsl_vector_get(Sv, 1) << std::endl;
  std::cout << std::endl << std::endl << "V = " << std::endl;
  std::cout << "\t" << gsl_matrix_get(V, 0, 0) << "\t"
            << gsl_matrix_get(V, 0, 1) << std::endl;
  std::cout << "\t" << gsl_matrix_get(V, 1, 0) << "\t"
            << gsl_matrix_get(V, 1, 1) << std::endl;

  gsl_linalg_SV_solve(U, V, Sv, XX, B);

  //  std::cout << "Beta = " << B << std::endl;
  //  std::cout << "V = " << V << std::endl;
  std::cout << std::endl << std::endl << "Beta = " << gsl_vector_get(B, 0)
            << "\t" << gsl_vector_get(B, 1) << std::endl;

  gsl_matrix_free(U);
  gsl_matrix_free(V);
  gsl_vector_free(Sv);
  gsl_vector_free(XX);
  gsl_vector_free(B);
  return 0;
}

 

I assume from your allocate and free family of functions that I can't create
the vectors and matrices using operator new, and free them using operator
delete, and thus give management of them to a smart pointer.  How, then, do
you manage to maintain exception safety?

 

That's all the questions I have for now.  I am sure I'll have more later.

 

Cheers

 

Ted

 


