Re: [Help-glpk] PC vs. WS

From: Brady Hunsaker
Subject: Re: [Help-glpk] PC vs. WS
Date: 25 Jan 2004 11:41:04 -0500

On Sat, 2004-01-24 at 18:54, C. Javier Sosa Gonzalez wrote:
> Hello,
> I am using glpk v3.2.3. I solve several LPs with this version of
> glpk on a PC and on a workstation (WS). Given a formulation of each
> LP in the CPLEX (lpt) file format, the solution is the same whether
> I solve it on the PC or on the WS.
> Today I tried the new glpk v4.4 and re-ran each problem on the PC
> and the WS. Now, with v4.4, for some problems the PC solutions
> differ from the WS solutions.
> My question is: is that right? I am using linear programming only,
> that is, just the simplex solver, and the same version of glpk on
> both the PC and the WS. I assumed the behaviour would be identical,
> since the program and the data are the same. What am I missing? On
> the other hand, I know that both solutions are valid (they are
> optimal solutions), but I do not understand how the same software
> can behave differently on the same data.
> As a final note, I tested the attached LP problem on several PCs
> running Linux and all of them gave me the same solution
> (address@hidden, address@hidden and address@hidden). The WSs are
> a Sun-Blade 150 and a SunFire 280R.
> Thanks in advance,
> Javier Sosa
> ----

I do not find this behavior surprising.  A difference between the x86
machines (Intel and AMD chips) and the Sun machines is in the
floating-point unit.  In general, floating-point values are stored in
the variable type 'double', which has 64 bits.  

Most 'workstation' machines, like most Suns and alpha machines, use 64
bits (roughly speaking) in their floating-point unit as well, making
sure that at every step the answers are rounded correctly to 64 bits.

x86 machines use the usual 64 bits in memory, but use 80 bits in the
floating-point unit.  Although this sounds like it should be as good or
better, in fact there are sometimes problems when converting back from
80 bits to the 64 bits used in memory.  Most computational researchers
that I know consider the 64-bit approach (called 'double precision
format' by IEEE) more numerically stable, though most of the time it
doesn't much matter.
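
This effect is easy to see even on a single machine. The sketch below (Python, purely illustrative; it is not anything glpk itself does) shows how delaying rounding of intermediate results, much as a wider FPU register does, can change the final 64-bit answer:

```python
import math

# Summing the same three doubles with different intermediate
# precision yields different final 64-bit results.
a, b, c = 1.0, 1e16, -1e16

# Round after every step: the 1.0 is lost when added to 1e16,
# because the spacing between adjacent doubles near 1e16 is 2.0.
step_rounded = (a + b) + c

# math.fsum keeps an exact internal sum and rounds only once,
# analogous to computing in a wider register and rounding at the end.
once_rounded = math.fsum([a, b, c])

print(step_rounded)  # 0.0
print(once_rounded)  # 1.0
```

Both results are IEEE-correct for the operations actually performed; they differ only in where the rounding happens.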

The point is that the two types of floating-point units can sometimes
give different results, even though both are complying with IEEE
standards for floating-point behavior.  Most of the time the differences
would be pretty minor, as in your case, where both machines did give
correct (optimal) solutions.

In your case, my guess is that at some point when performing a pivot,
the slight differences were enough to have them choose different
pivots.  Perhaps the two choices were in fact 'tied' if considered using
exact arithmetic.
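
As a toy illustration (a hypothetical ratio test, not glpk's actual pivot rule), a comparison that is an exact tie in rational arithmetic can come out untied after rounding, and different rounding behavior can break the tie either way:

```python
from fractions import Fraction

# Toy pivot rule: choose the row with the smallest ratio.
# In exact arithmetic both candidate ratios are 3/10 -- a tie.
exact = [Fraction(1, 10) + Fraction(2, 10), Fraction(3, 10)]
assert exact[0] == exact[1]

# In 64-bit floating point the "same" ratios compare unequal,
# so the chosen pivot depends on rounding details.
ratios = [0.1 + 0.2, 0.3]   # 0.30000000000000004 vs 0.3
pivot = min(range(len(ratios)), key=lambda i: ratios[i])
print(pivot)  # row 1 wins here; different rounding could pick row 0
```

Once two runs pick different pivots, they follow different (but equally valid) paths to optimality, which is consistent with what you observed.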

A good article on these issues is "What Every Computer Scientist Should
Know About Floating-Point Arithmetic" by David Goldberg.  Reading or
skimming the article would be helpful, though what you're really
interested in is in an addendum entitled "Differences Among IEEE 754
Implementations".  Both can be found at


Brady Hunsaker
Assistant Professor
Industrial Engineering
University of Pittsburgh
