
[Help-glpk] Non-sparse problems and other questions

From: Erik de Castro Lopo
Subject: [Help-glpk] Non-sparse problems and other questions
Date: Fri, 31 May 2002 15:40:56 +1000


Question 1

I have written a program which uses GLPK to solve a particular class of
problems (FIR filter design). For large filters, the problem has as many as
N columns and 3*N rows, where N can be as large as 1000. The problem is also
completely non-sparse: every entry of the constraint matrix is nonzero.
On problems of this size I often run out of memory on a machine with
768 MB of RAM.

A rough back-of-the-envelope calculation suggests that if these LP
matrices were stored as arrays of doubles, the memory used would be

        1000 * 3000 * sizeof (double) ~= 24 Meg

Since I'm running out of memory, my guess (without looking at the source 
code) is that you are using some sort of sparse matrix data storage.

Would you ever consider implementing a solver which handles large
non-sparse problems with lower memory usage? This would let me
tackle even larger problems (my goal) without buying more RAM :-).
It should also lead to quicker solutions.

Question 2

On these large problems (using glp_call_rsm1) I find that even finding an 
initial feasible solution can take a lot of number crunching and considerable
time. I do however have a very computationally cheap method of obtaining a
near-feasible solution. 

What I would like to do is find a way to feed this near-feasible solution
into GLPK and have it search for the optimal solution from there. I have
asked this question before and you gave a detailed response. Unfortunately
I understood very little of it, mainly because I am not at all familiar with
the terminology you used and my memories of learning LP are not too fresh.
However, as you suggested, I switched to using the glp_simplex2() solver
and I am doing the following:

  0) Set up objective function.
  1) Set up U, L and S constraints
  2) For each column do:

      glp_put_col_soln (problem, column, 'B', guess [column], 0.0) ;

  3) Tell the solver to do a warm start:

      glp_init_spx2 (&parm) ;
      parm.initb = 2 ;

  4) Run the solver:

      retval = glp_simplex2 (problem, &parm);

This however fails with the following error:

    gm_scaling: max / min = 1.476e+18
    gm_scaling: max / min = 1.476e+18
    glp_simplex2: k = 1372; invalid basis information

If I remove step 2) above, the glp_simplex2 solver does indeed find the optimal
solution. What am I doing wrong? Why can't I make the solver start from my
near-feasible solution, the guess [column] values?

When I moved from the glp_call_rsm1 solver to the glp_simplex2 solver I
started getting warning messages of "Numerical instability". Is this anything
to be concerned about?

Thanks in advance,
  Erik de Castro Lopo  address@hidden (Yes it's valid)
Moore's Law: hardware speed doubles every 18 months
Gates' Law: software speed halves every 18 months 
