[Help-glpk] Degeneracy cycling


From: Nick Downing
Subject: [Help-glpk] Degeneracy cycling
Date: Tue, 29 Sep 2009 10:57:51 +0400

hi everyone,

I've been using the solver for a month or two now and there are many things
I really like about it, in particular the cleanliness of the code: it is very
easy to see what is going on and to make experimental changes.  The main
reason I'm writing is to ask whether there is any built-in provision for
avoiding degeneracy cycling, and what people's plans/thoughts on the issue
are.  I don't mean the problem of losing feasibility, re-entering phase 1
with a consequent worsening of the objective and thereby repeating a basis;
I'm talking about performing a number of consecutive degenerate pivots that
don't improve the objective and eventually return to the same basis.

The manual and the code don't make any reference to this, so I assume there
isn't any anti-cycling provision.  For a while I thought the head[m+1..m+n]
array of non-basic variables was maintaining an implicit ordering somewhat
like Bland's rule, but then I realized that there is no ordering on the
leaving variables: see chuzr in glpspx01.c, for example, where in case of a
tie between leaving variables it tries to choose the one with the largest
coefficient in that column of the tableau.  (Note: no tolerance is used in
the test for a tie; I suggest that adding one, as in the sketch below, would
allow choosing a larger coefficient and thus a better-conditioned basis
matrix.  Does everyone agree?)  So I guess cycling can occur; has anybody
tried entering one of the textbook examples to see whether it really
happens?

So anyway, I tried implementing Bland's rule.  It was easy to do: I just
check whether the previous pivot was degenerate, and if so, use Bland's
pivot selection instead of the normal one.  Unfortunately this hurt
performance unreasonably in the normal (non-cycling) case, i.e. it led to
lengthy stalls, as my problems contain a lot of degeneracy and the Bland
pivot rule makes no attempt to choose an entering variable with a good
reduced cost.  So I also tried the Cacioppi & Schrage rule described by
Richard Kipp Martin (Large Scale Linear & Integer Optimization,
pp. 171-172; you can find this in Google Books).  I don't know if I
implemented it correctly, though, because it's supposed to address this
problem while imposing very few restrictions on the entering/leaving
variables, yet I still observed much stalling.
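For reference, the switching logic itself was roughly the following (a
sketch with made-up helper names, not the real glpspx01.c interface):

  /* Hypothetical helpers standing in for the real column choice. */
  int chuzc_bland(void);    /* smallest-index eligible entering variable */
  int chuzc_normal(void);   /* the usual rule, e.g. best reduced cost    */

  static int prev_degen = 0;   /* was the previous pivot degenerate? */

  int choose_entering(void)
  {  /* fall back to Bland's rule only while we may be cycling */
     return prev_degen ? chuzc_bland() : chuzc_normal();
  }

  void record_pivot(double obj_change)
  {  /* an exact zero test; a small tolerance may be wiser here */
     prev_degen = (obj_change == 0.0);
  }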

Eventually what solved the problem for me was a perturbation of the
right-hand sides.  Note that I hadn't actually observed cycling in practice,
but I did observe lengthy stalls with the standard code; the perturbation
fixed this and led to a dramatic improvement.  I did it by adding, to each
lb[1..m] and ub[1..m] pair, a random number in the range -1e-5..1e-5.  I
didn't bother removing the perturbation afterwards, as the accuracy of the
results was suitable for my purpose, but I would suggest the following: if
doing e.g. primal simplex, then after reaching optimality we have a
primal/dual feasible solution, and removing the perturbation amounts to
changing only the dual objective, so we wouldn't lose dual feasibility and
could continue optimizing with the dual simplex until the true optimum is
reached.
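Concretely, the perturbation amounted to something like this (a sketch
assuming 1-based lb/ub working arrays; the same shift is applied to both
bounds of a pair, so fixed variables stay fixed):

  #include <stdlib.h>

  /* Shift each bound pair by a uniform random value in [-1e-5, +1e-5]. */
  void perturb_bounds(int m, double lb[], double ub[])
  {  int i;
     for (i = 1; i <= m; i++)
     {  double eps = 2e-5 * ((double) rand() / RAND_MAX) - 1e-5;
        lb[i] += eps;
        ub[i] += eps;
     }
  }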

Another way of doing the perturbation is to note that the solver doesn't
normally work with a right-hand side: e.g. a constraint like x+y+z<=5 is
changed to s-x-y-z=0 with s<=5, rather than x+y+z-s=5 with s>=0 as in the
textbook formulation.  We could introduce a right-hand side containing only
the small perturbations, so that e.g. the calculation of x_B (bbar) would be
done as
  x_B = A_B^{-1} (b - A_N x_N)
where b holds the perturbations, rather than
  x_B = -A_B^{-1} A_N x_N
as we do currently.  This approach might be a bit more elegant, and it would
avoid having to save the lb/ub arrays for when the perturbation is removed.

Some other suggestions.  We maintain a copy of the row representation of N
(as suggested by Bixby) for the cbar and gamma recurrences, but the code to
delete a column of N as it's pivoted into the basis doesn't look
particularly efficient.  I made a version where N is constructed once at the
start, really just a row representation of the full coefficient matrix
(I | -A), and extra columns are simply ignored when updating cbar and gamma.
I also made the cbar and gamma arrays a bit bigger, so there is a slot for
every variable rather than only the non-basic ones; they then correspond
with the columns of (I | -A), which makes the addressing a bit easier.  For
the gamma array I put in an inner-loop test to avoid updating the basic
variables, but for cbar I didn't bother with such a test.
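The inner-loop test for gamma is nothing more than this pattern
(illustrative names; the gamma recurrence itself is elided):

  /* Walk one row of (I | -A) and touch gamma only for non-basic
     columns; entries whose column is currently basic are skipped. */
  void update_gamma_row(int len, const int ind[], const double val[],
        const char in_basis[], double gamma[])
  {  int k;
     for (k = 1; k <= len; k++)
     {  int j = ind[k];
        if (in_basis[j]) continue;   /* basic column: nothing to keep */
        /* ... apply the usual gamma update to gamma[j] using val[k] ... */
     }
  }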

I also think more diagnostic output would be good.  I added a counter for
how many degenerate pivots have occurred (next to the iteration counter),
and I'm also planning to output a count of the nonzeros in the basis
factorization, as I'm keen to know the effect of different pivoting
strategies (including the tie tolerance in chuzr mentioned earlier) on the
sparseness of the basis inverse.  Does anyone have a good reference for a
devex-like pivoting strategy that attempts to keep the basis as sparse as
possible?  I think this would be really helpful for my (very large)
problems.  Another change I was considering: when we construct the pivot
row, we could apply a tolerance in the test for zero, improving the
sparseness, but I haven't tried this yet.
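The zero test I mean is just a drop tolerance while gathering the pivot row
(a sketch with illustrative names, not the actual solver code):

  #include <math.h>

  #define DROP_TOL 1e-10   /* entries below this are treated as zero */

  /* Gather the sparse pivot row from a dense work array, dropping
     tiny entries to keep the row (and the subsequent updates) sparse. */
  int build_pivot_row(int n, const double work[], int ind[], double val[])
  {  int j, nnz = 0;
     for (j = 1; j <= n; j++)
     {  if (fabs(work[j]) < DROP_TOL) continue;   /* drop tiny entry */
        nnz++, ind[nnz] = j, val[nnz] = work[j];
     }
     return nnz;   /* number of nonzeros kept */
  }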

Another thing I did was to implement a GLP-native format for storing the LP.
I did this rather reluctantly, as the world doesn't really need yet another
competing format, but the problem with MPS is that it doesn't save the
minimization/maximization flag, and the CPLEX LP format doesn't save the
constant term of the objective function.  Also, at least with CPLEX LP, the
rows are loaded in a different order (they are entered as they are
encountered in the file), which wasn't acceptable for my purpose; in fact it
was very misleading, and I wasted a fair bit of time wondering why my
post-solution analyzer wasn't working properly.  The new format is very
similar to the solution files already output; I created glp_write_glp and
glp_read_glp by modifying glp_write_sol and glp_read_sol.  I also added a
new --basis option for glpsol, so that it can read a (possibly non-optimal)
solution file written by glp_write_sol and use it as the starting basis.
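Usage is simply (with my patched glpsol; file names are illustrative):

  glpsol --cpxlp model.lp --basis start.sol -o result.txt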

One last comment: the current arrangements aren't very good for a caller
that wishes to solve, make some changes to the model, re-solve, and so on.
In particular I'm talking about the init_csa routine, e.g. in glpspx01.c,
which insists on copying the entire problem every time it is entered.
Could we make this incremental, merging only the changes to the problem
into its private data structures?
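A rough sketch of what I mean (a fragment with hypothetical change flags;
nothing like this exists in the current code):

  /* Instead of copying everything on entry to init_csa, refresh only
     the parts of the private copy that the caller has touched. */
  if (lp->bounds_changed) copy_bounds(csa, lp);
  if (lp->obj_changed)    copy_obj(csa, lp);
  if (lp->matrix_changed) copy_matrix(csa, lp);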

Sorry, this turned into a bit of an essay, but what does everyone think?
Should I upload some of my code?  It's not particularly clean at the moment,
but it would probably be a starting point.

cheers, Nick
 