Re: Help with slow model translation

From: Heinrich Schuchardt
Subject: Re: Help with slow model translation
Date: Fri, 21 Feb 2020 23:49:19 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.5.0

On 2/21/20 6:26 PM, Heinrich Schuchardt wrote:
On 2/21/20 5:46 PM, Greg Gruber wrote:

I have a model that I have developed using MathProg, that I am using
regularly. Some problem datasets result in very large models. The
problem itself is solving very quickly, but I am finding the model
translation (reading the .mod file, data files, and setting up the
problem) to be very slow.

I came across this in the Wikipedia entry: "The take-home message is
that nested set iterations should be avoided where possible, as these
greatly expand the dimensionality and size of the model space to be
processed." It also discusses a case on the OSeMOSYS energy model where
the execution time was reduced from 15 hours to 9 minutes by a
reformulation of the model.

Unfortunately, I'm not sure what they mean here, and I am looking for an
example. I have tried searching the OSeMOSYS site for more information,
as well as searching for publications by Jonas Horsch, who did the
reformulation, but have had no luck yet.

Does anybody have examples illustrating how to avoid nested-set
iterations in MathProg?

Thanks in advance,

Hello Greg,

please provide an example model that should be simplified.

Best regards


Thanks for the example model.

The GMPL language is not very efficient for formulating the complex
constraints you have.
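For readers wondering what the "nested set iterations" in the Wikipedia quote look like in practice, here is a hypothetical GMPL fragment. The set and parameter names are invented for illustration; this is not the actual OSeMOSYS reformulation, only a sketch of the general pattern:

```ampl
set TECH;    # technologies
set FUEL;    # fuels
set MODE;    # modes of operation
set YEAR;    # model years

param OutputActivityRatio{TECH, FUEL, MODE, YEAR}, default 0;
param Demand{FUEL, YEAR}, >= 0;

var Activity{TECH, MODE, YEAR}, >= 0;

# Slow pattern: every constraint instance scans the full TECH x MODE
# product and filters on a parameter that is mostly zero:
#
# s.t. Balance{f in FUEL, y in YEAR}:
#     sum{t in TECH, m in MODE: OutputActivityRatio[t,f,m,y] <> 0}
#         OutputActivityRatio[t,f,m,y] * Activity[t,m,y] >= Demand[f,y];

# Reformulation: collect the sparse (t,m) pairs in an indexed set, so
# the sum ranges only over members that actually contribute:
set PRODUCERS{f in FUEL, y in YEAR} :=
    setof{t in TECH, m in MODE: OutputActivityRatio[t,f,m,y] <> 0} (t, m);

s.t. Balance{f in FUEL, y in YEAR}:
    sum{(t,m) in PRODUCERS[f,y]}
        OutputActivityRatio[t,f,m,y] * Activity[t,m,y] >= Demand[f,y];
```

Moving the membership test into a precomputed indexed set can avoid re-scanning the full Cartesian product inside every sum over it; how much translation time this saves depends on the sparsity of the data, and the actual OSeMOSYS speedup involved a more extensive restructuring than this sketch shows.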

To formulate very large problems, I would always prefer to generate the
model from a programming language like Java and to call the solver
library directly from that language.

Your problem has more than 30 million variables. I doubt that you will
get a solution for a problem of this size with GLPK.

Best regards

