
From: Robbie Morrison
Subject: Re: [Help-glpk] Optimization and Multicore GLPK
Date: Wed, 2 Jan 2013 10:25:01 +1300
User-agent: SquirrelMail/1.4.22

Hi Reg, Harley, all

To:           glpk <address@hidden>
Subject:      Re: [Help-glpk] Optimization and Multicore GLPK
From:         Reginald Beardsley <address@hidden>
Date:         Sat, 29 Dec 2012 07:27:47 -0800 (PST)

> I'd suggest just "Performance Profiling and
> Optimization" as a title for the wiki chapter.
> Profiling is needed just for intelligently setting
> flags with modern compilers and constantly
> shifting machine architectures.

Thanks.  I will go with "Performance profiling and
code optimization" to make it a little clearer
that optimization takes on its computer science
meaning.

> All the papers address a point in time when
> shared memory multiprocessors were being
> eclipsed by non-uniform access memory (NUMA)
> distributed machines.  For many reasons, most
> performance gains must be with NUMA machines. I
> suggest "Computer Architecture: A Quantitative
> Approach" by Hennessy and Patterson to any who
> are interested with the caveat that you'll need
> to know or learn a lot about hardware to
> benefit.  I plan to order the 5th edition.  I
> read the first 3 when they came out, but missed
> the 4th.  It's a constantly moving target.

Here are the full references:

  Hennessy, John L and David A Patterson.  2011.
      Computer architecture : a quantitative
      approach -- Fifth edition.  Morgan Kaufmann,
      San Francisco, California, USA.  ISBN
      978-0-12-383872-8 (pbk).

  Hennessy, John L and David A Patterson.  2007.
      Computer architecture : a quantitative
      approach -- Fourth edition.  Morgan
      Kaufmann, San Francisco, California, USA.
      ISBN 978-0-12-370490-0 (pbk).

The fourth edition is freely available as a PDF.

> Seismic processing codes are littered with
> coding artifacts from being ported to APs.  It
> can get really difficult to read the code.  So
> I'd leave GPUs to those interested in algorithm
> research.  Fun stuff, but very difficult and
> demanding.

Personally, I cannot see the point of tailoring
the codebase to specialist hardware (like GPUs)
in a general-purpose project like GLPK.

> Running multiple job queues as in the example I
> wrote for the wikibook is the best I can do to
> exploit multiple cores in my work.

I agree that many projects these days run multiple
scenarios or undertake parameter scans and other
forms of sensitivity testing.  So running each in
its own process concurrently has an appeal.

GNU parallel was mentioned earlier and has the
advantage that each job's console output is kept
separate, buffered, and printed, when feasible,
in order of commencement (not completion).

To:           address@hidden
Subject:      Re: [Help-glpk] Optimization and Multicore GLPK
Message-ID:  <address@hidden>
From:         Harley <address@hidden>
Date:         Tue, 01 Jan 2013 12:07:07 +1100

> For those questioning the need for the requirement
> of re-entrant code for GLPK, I asked the same
> question and Xypron sent this answer to me by
> email:

I am going to suggest that the thread-safety fix
(assuming this does spark into life) is accorded
its own workflow and wikibook page.  My current
plan is to call it "The thread-safety workflow".

It may simply end up being a short chapter in the
GLPK manual and a page on the wikibook explaining
why GLPK is not thread-safe, why it does not need
to be thread-safe, and how users can adjust their
usage to accommodate this.

But remember that people do ask about thread
safety from time to time.  Maybe we should quiz
them a little more about their requirements.

It might be useful to dwell for a moment on what
GLPK might look like in five years' time.  New
branching algorithms, for instance.


About to start on some wiki markup ...

best wishes, Robbie
Robbie Morrison
PhD student -- policy-oriented energy system simulation
Technical University of Berlin (TU-Berlin), Germany
University email (redirected) : address@hidden
Webmail (preferred)           : address@hidden
[from Webmail client]
