[Help-glpk] Model Too Large

From: Mark Gritter
Subject: [Help-glpk] Model Too Large
Date: Tue, 3 Apr 2007 19:13:13 -0500

I have been using glpsol successfully to solve some game theory
problems.  The models I build optimize a player's strategy mix against
the opponent's best responses.

I tried to extend this approach to a larger problem ("3-card push or
fold lowball").  The core of the model is this constraint:

# Define the maximums from B's perspective
# What this says is: B will have a pure strategy that is best against
# our mixed strategy, for each hand 'k' he draws.
s.t. MaxEV {(ka,kb,kc) in Hands, l in PlayerBStrategies} :
    m[ka,kb,kc] >= sum{ (ia,ib,ic) in Hands, j in PlayerAStrategies }
         ( p[ia,ib,ic,j] * weighted_ev_for_b[ia,ib,ic,ka,kb,kc,j,l] );

There are 455 elements in Hands, 257 in PlayerAStrategies, and 5 in
PlayerBStrategies.

Unfortunately, the matrix this generates is fairly dense (I think).  A
dense matrix representation should take about 2.4 GB, but loading
this model into glpsol uses more than 100 GB of memory; it did not
finish loading before exhausting all the swap I had allocated on my
(64-bit) system.
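For what it's worth, here is my back-of-the-envelope arithmetic (a
sketch in Python; the 40-bytes-per-nonzero figure is my guess at
GLPK's sparse-element overhead on a 64-bit build, not a measured
value):

```python
# Rough size estimate for the MaxEV constraint block above.
hands, a_strats, b_strats = 455, 257, 5

rows = hands * b_strats    # one MaxEV row per (k, l) pair
p_cols = hands * a_strats  # one p[i,j] column per (hand, strategy) pair
nnz = rows * p_cols        # every p column appears in every row (dense)

# Dense storage: one double per entry, including the m[k] columns.
dense_gb = rows * (p_cols + hands) * 8 / 2**30

# Assumption: GLPK keeps each nonzero as a linked sparse element
# (value plus row/column links), call it ~40 bytes apiece.
sparse_gb = nnz * 40 / 2**30

print(rows, p_cols, nnz)      # 2275 116935 266027125
print(round(dense_gb, 1))     # 2.0
print(round(sparse_gb, 1))    # 9.9
```

So the dense figure lands near the 2.4 GB I mentioned, and even a
sparse representation of ~266 million nonzeros needs roughly 10 GB;
the 100 GB presumably comes from additional overhead in the MathProg
generator itself.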

Is there any way I can reduce the memory usage?  Any way I can rewrite
the above constraint to be more tractable?  (Other than just reducing
the size of the problem.)

If not, can anybody recommend an alternate solver better suited to
dense matrices?

Is there some way I can get the LP problem output incrementally?  I
tried using glpsol's option --wfreemps, but ran into the same memory
exhaustion.  How feasible is it to reuse the MathProg model parser,
but write a C/C++ backend that spits out just a portion of the problem
at a time, rather than keeping it all in memory?


Mark Gritter
