Subject: Re: [Help-glpk] Parallel runs of glpk
Date: Wed, 14 Dec 2016 08:39:51 -0600
For what it is worth, my team-mates and I have done your option A before and it has worked fine. The only slight complication is that we had to regulate the number of simultaneous processes. In our usage, we run glpsol.exe via a "system" call (it is called different things in different languages). You can also decide the level of data abstraction: do you build MPS files, which are quick to process but take longer to program, or do you build *.dat files and use MathProg?
In our application, we did not have to worry about memory, but perhaps you do. We needed our driver application to be 32-bit because it was connected to 32-bit Excel DLLs, but by running glpsol.exe via "system" calls we could still use the 64-bit version of the solver.
From: Help-glpk [mailto:help-glpk-bounces+address@hidden] On Behalf Of Mathieu Dutour
I am interested in running glpk from a multithreaded program.
The goal is not to have GLPK itself parallel but instead to
have glpk used many times by different threads for solving
many different linear programs.
As is well known, GLPK is not thread-safe, and my question
is about alternative solutions to that problem. Here are some options:
A) One is to run glpsol standalone as an external program,
with the input file generated by the thread and the output
then read back by the thread. This solution has a cost
in terms of runtime, but it is easy to program.
B) Have one dedicated thread that is the only one to call GLPK.
This gives adequate single-threaded performance but is potentially
expensive, since other threads may have to wait. Relatively easy to program.
C) Use shared memory to exchange data: that is, a number of
individual processes each running GLPK and getting their
data from shared memory.
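A minimal sketch of option C, using Python's multiprocessing.shared_memory: the parent packs problem data into a shared block, a separate process (which could safely link GLPK, since it has its own address space) reads the data and writes a result back. The "solve" here is a placeholder sum, not a real GLPK call:

```python
import struct
from multiprocessing import Process, shared_memory

def solver_process(shm_name, n):
    """Runs in its own process, so GLPK's thread-unsafety would not
    matter here. The sum() is a stand-in for a real solve."""
    shm = shared_memory.SharedMemory(name=shm_name)
    coeffs = struct.unpack_from(f"{n}d", shm.buf, 0)
    result = sum(coeffs)                       # placeholder "solve"
    struct.pack_into("d", shm.buf, n * 8, result)
    shm.close()

def solve_via_shared_memory(coeffs):
    """Pack coeffs into shared memory, run the solver process, and
    read the result back from the slot after the input data."""
    n = len(coeffs)
    shm = shared_memory.SharedMemory(create=True, size=(n + 1) * 8)
    try:
        struct.pack_into(f"{n}d", shm.buf, 0, *coeffs)
        p = Process(target=solver_process, args=(shm.name, n))
        p.start()
        p.join()
        return struct.unpack_from("d", shm.buf, n * 8)[0]
    finally:
        shm.close()
        shm.unlink()
```

In a real deployment the child would be a long-lived solver worker rather than a per-call process, but the data exchange pattern is the same.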
Any other solution? Is there any implementation that you know of?