help-octave

Re: Optimisation functions missing?


From: heberf
Subject: Re: Optimisation functions missing?
Date: Thu, 4 Mar 1999 13:24:56 -0600 (CST)

Ok, we need to get together and write this stuff.  This is not hard.  The
traditional way to solve an LP problem is with the simplex method, and
there is a similar method for QP problems.

I recommend we try something a little more modern.  If someone (or several
someones) would like to join me, we could have something in a short amount
of time.  Let me give you a little introduction to modern optimization
(I'm no expert, but I've read some recent work and taken a class or two a
few years back).  When Octave first came out we talked about using NPSOL,
but the license was a problem.  The truth is that NPSOL uses a very old
algorithm called "sequential quadratic programming".  I'm thinking we can
do better than this.

The problem with free software for optimization is the constraints.
Unconstrained optimization is easy, and there is lots of free code for
doing it.
Modern optimization techniques transform constrained problems into
unconstrained problems.  The way you do it is to add a "penalty function"
which is close to zero on the inside of the feasible region and rises
sharply near the edge.  There is a parameter which controls how steeply the
function rises at the edge and how close to zero it is in the middle, so
that in the limit the function is equal to the true objective inside the
feasible region and is infinity elsewhere.  The first step is to solve the
problem for a low value of this parameter.  This gives you a solution which
lies strictly in the interior of the feasible region.  For this reason
these methods are called "interior point methods".  Now imagine solving the
problem for all possible values of this parameter.  This traces out a path
of solutions called the "central path".  The trick comes in deciding how
close you need to get to solving these intermediate problems and how much
to increase the parameter between each intermediate problem (long step vs.
short step).  In other words, you decide how close you are going to stay to
the central path.
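To make the idea concrete, here is a minimal sketch of a primal log-barrier
method for a toy LP.  It is in Python/NumPy rather than Octave just because
that is quickest to write down; the translation is nearly line-for-line.
The barrier weight mu plays the role of the parameter above (shrinking mu
is equivalent to increasing the parameter), and the inner solver is a crude
backtracking gradient descent.  All problem data and names here are
illustrative, not from any particular package.

```python
import numpy as np

# Toy LP:  minimize c'x  subject to  A x <= b.
# The feasible region is the unit box 0 <= x_i <= 1; the optimum is (1, 1).
c = np.array([-1.0, -1.0])
A = np.vstack([np.eye(2), -np.eye(2)])       # constraints: x <= 1, -x <= 0
b = np.array([1.0, 1.0, 0.0, 0.0])

def barrier(x, mu):
    """Objective plus log-barrier penalty; +inf outside the feasible region."""
    s = b - A @ x                            # slacks
    if np.any(s <= 0.0):
        return np.inf
    return c @ x - mu * np.sum(np.log(s))

def barrier_grad(x, mu):
    """Gradient of c'x - mu*sum(log(b - A x)) -- known in closed form."""
    s = b - A @ x
    return c + mu * A.T @ (1.0 / s)

x = np.array([0.5, 0.5])                     # strictly feasible starting point
mu = 1.0
for _ in range(30):                          # outer loop: follow the central path
    for _ in range(200):                     # inner loop: crude gradient descent
        g = barrier_grad(x, mu)
        f0 = barrier(x, mu)
        t = 1.0
        # backtrack until the step is feasible and actually decreases f
        while barrier(x - t * g, mu) > f0 and t > 1e-12:
            t *= 0.5
        x = x - t * g
    mu *= 0.5                                # shrink the barrier weight

print(x)                                     # approaches the true optimum (1, 1)
```

Each outer pass solves one intermediate problem approximately and then
moves the parameter; the sequence of inner minimizers is exactly the
central path described above.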

The advantage of these methods is twofold.

        1.  They are easy, since unconstrained problems are easy to code.
LP and QP problems are particularly easy, since the gradient of the
objective + penalty is known in closed form.
        2.  They scale well, in the sense that as the number of choice
variables and constraints increases (problem complexity), the number of
function evaluations it takes to find a solution grows rather slowly (at a
polynomial rather than an exponential rate, as was the case for the
classical methods).
        
        Point 2 brings up the issue of large problems.  These methods will
work well for large problems, but such problems usually involve sparse
matrices.  In order to have a first-rate package we will need this
functionality eventually.
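To see why sparsity is the key to large problems: the barrier gradient
c + mu*A'(1/s) needs only matrix-vector products, so with a sparse
constraint matrix the cost per iteration is proportional to the number of
nonzeros rather than to the full m-by-n size.  A small sketch using
SciPy's sparse module as a stand-in for whatever sparse support we end up
with (sizes and names are illustrative):

```python
import numpy as np
from scipy import sparse

# 100,000 constraints x_i <= 1, one nonzero per row -- never form A densely.
n = 100_000
A = sparse.eye(n, format="csr")
b = np.ones(n)
c = -np.ones(n)

x = np.full(n, 0.5)                   # strictly feasible point
mu = 1.0
s = b - A @ x                         # slacks
grad = c + mu * (A.T @ (1.0 / s))     # O(nnz) work: one sparse mat-vec product

print(grad[:3])
```

A dense n-by-n matrix at this size would not even fit in memory, while the
sparse product is essentially free.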
        
Anyone want to help me?

P.S. If you want to look at an example, read Anstreicher et al.,
Algorithmica (1993), vol. 10, no. 5, November, pp. 365-382, which lays out
an algorithm for QP problems in sufficient detail that we could code it up
quickly.


_______________________________________________________________________________
Heber Farnsworth
Assistant Professor of Finance
John M. Olin School of Business
Washington University
Campus Box 1133                                 phone: (314) 935-4221   
One Brookings Drive                             FAX: (314) 935-6359
St. Louis, MO 63130-4899                



