How to get the best out of optimisation software
Finding the least costly, the most profitable, or the fastest solution to a problem is of interest to a wide range of disciplines. Given some function of the variables of interest, along with constraints on those variables and some explicit data, how can one best obtain the optimum solution? Search spaces in global optimisation problems tend to be vast, noisy, and full of discontinuities. Generally, it is not possible to try every point in the search space in a reasonable time on a finite machine. Consequently, many software producers have developed specialised routines to reduce the number of search points and search directions. No single optimisation package can solve all optimisation problems. This article offers a synoptic view of some of the main packages that are commercially available. For those who wish to go more deeply into the subject, an excellent guide to purchasing such software, together with a list of frequently asked questions, can be found on Professor Robert Fourer's website. Useful introductory texts are those by Pinter and Bhatti.
Mathematica 5.0, the newly released version of this software from Wolfram, offers a much wider range of options for solving optimisation problems than previous versions. Faster, more accurate, and more powerful algorithms are now included, e.g. the interior point method. For linear problems, LinearProgramming[ ] finds global minima of linear functions under linear constraints, while FindMinimum[ ] finds local minima of arbitrary functions. Indeed, the linear programming solver ConstrainedMin[ ] is faster at solving linear models than some global optimisation routines (see below).
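To fix ideas, the shape of a small linear programme can be sketched in Python with SciPy (used here purely as an illustrative stand-in; it is not part of Mathematica, and the toy problem below is my own):

```python
# Sketch of a small linear programme, using SciPy as a stand-in for
# Mathematica's LinearProgramming[]/ConstrainedMin[].
# Maximise x + 2y subject to x + y <= 4 and x <= 2, with x, y >= 0.
# linprog minimises, so the objective is negated.
from scipy.optimize import linprog

c = [-1.0, -2.0]                    # minimise -(x + 2y)
A_ub = [[1.0, 1.0],                 # x + y <= 4
        [1.0, 0.0]]                 # x <= 2
b_ub = [4.0, 2.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)  # default bounds: x, y >= 0
print(res.x, -res.fun)              # optimum at (0, 4), objective value 8
```

Because the feasible region is a polytope and the objective is linear, the optimum found here is guaranteed to be global, which is why linear solvers can afford to be so much faster than general global optimisation routines.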
NMinimize[ ] is a new, general-purpose command that implements a choice of four routines for finding constrained or unconstrained global minima. (A very similar function, NMaximize[ ], finds global maxima.) Briefly, the choices are as follows. 'DifferentialEvolution' is a genetic algorithm that is very robust but, like all algorithms that need to retain large numbers of data points, can also be slow. 'NelderMead' is an implementation of the by-now classic simplex algorithm of that name. 'RandomSearch', the simplest of the new options, creates a large set of starting points for the FindMinimum[ ] function. 'SimulatedAnnealing' generates a random set of starting points and then, for each of these points, picks a random direction in which to move. Mathematica 5.0 also adds new variable metric algorithms, e.g. BFGS.
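The same menu of methods exists outside Mathematica. As a rough illustration (using SciPy's optimisers, which are analogues of these methods rather than the NMinimize[ ] implementations), a differential-evolution global search and a Nelder-Mead local search on the classic Himmelblau test function look like this:

```python
# Analogues of two NMinimize[] method choices, sketched with SciPy:
# a differential-evolution global search and a Nelder-Mead simplex search.
from scipy.optimize import differential_evolution, minimize

def himmelblau(p):
    """Classic multimodal test function; its global minimum value is 0."""
    x, y = p
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2

# Global search: robust but relatively slow, like 'DifferentialEvolution'.
de = differential_evolution(himmelblau, bounds=[(-5, 5), (-5, 5)], seed=1)

# Local search: fast and derivative-free, like 'NelderMead'.
nm = minimize(himmelblau, x0=[0.0, 0.0], method='Nelder-Mead')

print(de.fun, nm.fun)  # both approach 0 at one of the four minima
```

The contrast is the practical point: the population-based search pays a price in function evaluations for its robustness, while the simplex search is cheap but only finds the minimum of whichever basin the starting point lies in.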
Minimize[ ] finds the exact global minimum for a function over a region, with or without constraints. If the function and the constraints are linear or polynomial, Minimize[ ] will always find a global minimum.
All of these commands are built into the standard Mathematica 5.0 package and they are more than adequate for routine global or local optimisation calculations. However, fiendishly difficult functions may give them problems.
MathOptimizer 1.0/MathOptimizer Professional
MathOptimizer 1.0 is an add-on for the main Mathematica program, designed primarily to solve the most general class of constrained optimisation problems. It has two 'solvers': one for global optimisation problems and another that may be used for precise local optimisation. A solver integrator called Optimize[ ] allows these two packages to be used either separately or in a combined attack on a problem. The packages can be used in a variety of operational modes and can be tuned to a specific problem by adjusting various operating parameters. These routines may be applied to linear or non-linear problems defined by a finite number of real-valued, continuous functions over a finite n-dimensional region.
Generally, the solution process advocated by this product is a prudent combination of a global search followed by an efficient local search. The global solver uses a statistical bounding procedure to estimate the global optimum on a given range, based on global sampling results; an adaptive stochastic search method then refines this estimate. The local solver is based on a constrained non-linear (convex) programming approach and can be used for precise local optimisation from a given initial solution.
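The global-then-local pattern itself is generic, and can be sketched in a few lines of Python (this illustrates the strategy only, not MathOptimizer's own algorithms; the test function and sampling scheme are my own):

```python
# Two-phase strategy: a coarse global scan of the region, then precise
# local refinement of the best point found. Illustrates the approach,
# not MathOptimizer's actual algorithms.
import numpy as np
from scipy.optimize import minimize

def f(x):
    """1-D Rastrigin function: many local minima, global minimum 0 at x = 0."""
    return 10 + x**2 - 10 * np.cos(2 * np.pi * x)

# Phase 1: coarse scan of the region [-5.12, 5.12].
grid = np.linspace(-5.12, 5.12, 201)
best_start = grid[np.argmin(f(grid))]

# Phase 2: polish with a precise local solver.
res = minimize(lambda x: f(x[0]), x0=[best_start], method='Nelder-Mead')
print(res.x[0], res.fun)  # close to x = 0, f = 0
```

The division of labour is the key idea: the global phase only needs to land in the right basin, after which the local phase supplies the precision.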
MathOptimizer 1.0 is currently aimed at very difficult, continuous, non-linear problems. It is not targeted at models with discrete variables, although modules for these are under development.
MathOptimizer Professional uses a suite of global and local search algorithms called the 'Lipschitz Global Optimizer' (LGO). The optimisation problem is first formulated in Mathematica and then automatically translated into C. LGO analyses the problem and displays the solution in the Mathematica notebook. LGO has been used to solve models with up to one thousand variables and constraints.
These packages are developed by J. D. Pinter who, since completing his PhD in optimisation at Moscow State University in 1982, has become an internationally known expert in the field. One of his textbooks has won an international award (the INFORMS Computing Society Prize for Research Excellence) and he is due to publish another shortly. A new version of MathOptimizer is due for release in August, with integer variable handling and 'black box' system analysis capabilities.
Global Optimization 4.2
Craig Loehle's Global Optimization 4.2 is a varied and comprehensive suite of 10 optimisation functions designed for constrained and unconstrained global non-linear optimisation. Each program has adjustable parameters so it can be tuned to the problem in hand, and for some functions the constraint sets can be non-linear or even disjoint. Each function is designed for specific types of problem, but generally they target large (200+ variables), non-linear problems, with or without constraints, and they are robust to noisy functions and to local and multiple minima.
A brief summary of the commands and their scope of application: GlobalSearch[ ], GlobalPenaltyFunction[ ], and MultiStartMin[ ] are general non-linear solvers, implemented as multiple-start, generalised hill-climbing algorithms. GlobalSearch[ ] is the general-purpose optimiser. GlobalPenaltyFunction[ ] handles the special case of constraints that are not analytic, or that are black box. MultiStartMin[ ] is a special-case version of GlobalSearch[ ] designed for problems with up to 15 integer or discrete variables, or that are highly non-linear.
In some problems, a region rather than a point better describes the solution. GlobalMinima[ ] uses an Adaptive Grid Refinement algorithm to find such solution regions for problems with up to 14 variables. IntervalMin[ ] is a general minimiser that uses interval methods.
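Interval methods work by evaluating the function on whole intervals at once and discarding subregions whose lower bound cannot beat the best value found so far. A toy one-dimensional branch-and-bound along these lines (a sketch of the general idea only, not IntervalMin[ ]'s implementation) might be:

```python
# Toy interval branch-and-bound for f(x) = (x - 1)^2 - 1 on [-2, 3].
# A natural interval extension of f bounds its range on any subinterval;
# subintervals whose lower bound exceeds the incumbent are pruned.
# A sketch of the idea only, not IntervalMin[]'s implementation.

def f(x):
    return (x - 1.0) ** 2 - 1.0

def interval_lower_bound(lo, hi):
    """Lower bound of f on [lo, hi] via interval arithmetic on (x-1)^2 - 1."""
    a, b = lo - 1.0, hi - 1.0                            # interval for (x - 1)
    sq_lo = 0.0 if a <= 0.0 <= b else min(a * a, b * b)  # interval square
    return sq_lo - 1.0

def interval_minimise(lo, hi, tol=1e-6):
    best = f(lo)                          # incumbent value from an endpoint
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        if interval_lower_bound(a, b) > best:
            continue                      # prune: cannot contain the minimum
        m = 0.5 * (a + b)
        best = min(best, f(m))            # update incumbent at the midpoint
        if b - a > tol:
            stack += [(a, m), (m, b)]
    return best

print(interval_minimise(-2.0, 3.0))  # approaches the true minimum, -1
```

The attraction of the interval approach is that the pruning is rigorous: a discarded subinterval provably cannot contain the global minimum, which is how such methods can certify a solution region rather than merely report a point.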
InterchangeMethodMin[ ] and TabuSearchMin[ ] are for 0-1 integer problems. Such problems arise, for example, in networks, transportation, scheduling, and capital allocation where the variables represent quantities that come only in standard amounts. These functions maximise a linear or non-linear function of integer 0-1 variables, either with or without constraints.
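For a flavour of the 0-1 setting, here is a generic one-flip interchange heuristic on a small knapsack-style instance (my own illustration, not InterchangeMethodMin[ ]'s actual algorithm):

```python
# A generic one-flip interchange heuristic for a 0-1 problem: repeatedly
# flip the single variable that improves the objective, while the weight
# constraint stays satisfied. Illustrative only, not the
# InterchangeMethodMin[] algorithm.
values  = [10, 7, 4, 3, 2]
weights = [ 5, 4, 3, 2, 1]
CAP = 8

def objective(x):
    if sum(w for w, xi in zip(weights, x) if xi) > CAP:
        return -1  # infeasible
    return sum(v for v, xi in zip(values, x) if xi)

x = [0, 0, 0, 0, 0]
improved = True
while improved:
    improved = False
    for i in range(len(x)):
        y = x.copy()
        y[i] = 1 - y[i]                  # interchange: flip one variable
        if objective(y) > objective(x):
            x, improved = y, True
print(x, objective(x))                   # stops at value 14
```

On this instance the heuristic stops at value 14 (items of value 10 and 4), while the true optimum is 15 (items of value 10, 3, and 2): no single flip leads from one to the other. This is precisely the kind of trap that motivates Tabu search, which adds memory to the interchange scheme so it can escape such local optima.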
GlobalSearch[ ] is the engine used for two other functions: NLRegression[ ] and MaxLikelihood[ ]. The output of the NLRegression[ ] procedure is a table of confidence intervals on the parameters, sensitivity plots, the fitted parameter values, and the fit statistics. MaxLikelihood[ ] is an estimation procedure that attempts to maximise the log-likelihood of a function with respect to a data set. MaxAllocation[ ] is designed for allocation problems such as are found in finance; investment allocation and derivative construction to hedge an investment are examples of such problems.
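The idea underlying maximum-likelihood estimation is simply optimisation in disguise, as this sketch shows (using SciPy rather than MaxLikelihood[ ] itself; the data set is invented for illustration):

```python
# Maximum-likelihood estimation sketched with SciPy (not MaxLikelihood[]):
# fit a normal distribution's mean and standard deviation to data by
# minimising the negative log-likelihood.
import numpy as np
from scipy.optimize import minimize

data = np.array([2.1, 1.9, 2.4, 2.0, 1.8, 2.2, 2.3, 1.7])

def neg_log_likelihood(params):
    mu, sigma = params
    if sigma <= 0:
        return np.inf                    # keep the search in the valid region
    return 0.5 * len(data) * np.log(2 * np.pi * sigma**2) \
        + np.sum((data - mu)**2) / (2 * sigma**2)

res = minimize(neg_log_likelihood, x0=[1.0, 1.0], method='Nelder-Mead')
mu_hat, sigma_hat = res.x
print(mu_hat, sigma_hat)  # MLE: the sample mean and (biased) sample std
```

For the normal distribution the answer is known in closed form (the sample mean and the biased sample standard deviation), which makes it a convenient check; for realistic models the likelihood surface has no closed-form maximum, and a robust optimiser such as GlobalSearch[ ] earns its keep.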
The suite of 10 programs offered by Global Optimization 4.2 is very comprehensive and addresses a wide range of practical problems. The algorithms are fast and robust and they can solve very tough problems very quickly.
MatLab 6.5 Release 13 and MatLab's Optimization Toolbox 2.2
The MatLab 6.5 Optimization Toolbox 2.2 is an add-on for the MatLab package. It contains functions to solve constrained and unconstrained non-linear minimisation problems, as well as quadratic and linear programming problems. Solution procedures for large-scale problems that are sparse and structured are a special feature of this Toolbox. One new feature in version 2.2 is an improved default algorithm for the function 'fsolve', with better convergence properties than the previous default method.
MatLab's optimisation functions are designed to find local minima, and they can be trapped by a local minimum, especially with oscillatory functions. They will only find a global minimum if it is the only minimum and the function is continuous.
However, Optimization Toolbox 2.2 provides a formidable range of 17 local optimisation functions, and MatLab's computing environment lends itself readily to the solution of these continuous optimisation problems. In many cases of practical interest, such locally optimal solutions are more than adequate. Also, the local optimum found with one run of the package can be used as the starting point for another run until a global optimum is obtained, if that is what is required.
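A crude multistart along those lines launches a local solver from several starting points and keeps the best result; the sketch below is a generic illustration in Python, not a Toolbox function:

```python
# Crude multistart: run a local optimiser from several starting points
# and keep the best local minimum found. Generic illustration only.
from scipy.optimize import minimize

def f(x):
    """Two minima of different depth; the deeper (global) one is near x = -2."""
    return (x[0]**2 - 4)**2 + x[0]

starts = [-3.0, 0.5, 3.0]
results = [minimize(f, x0=[s], method='Nelder-Mead') for s in starts]
best = min(results, key=lambda r: r.fun)
print(best.x[0], best.fun)  # the deeper minimum, near x = -2
```

With only one start near x = 3, the solver would report the shallower minimum near x = 2 in good faith; spreading the starts is the cheapest insurance a local method offers against that.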
In this Toolbox there are also algorithms to solve non-linear equations and curve-fitting problems. The algorithms designed to solve large-scale linear programming problems with sparse matrices are very impressive and form a special feature of Optimization Toolbox 2.2. One such algorithm, 'linprog', can solve a model with 1677 variables and 627 linear equality constraints in seconds. Another, 'fminunc', can minimise an n-dimensional test function with n = 1000 in eight iterations.
MatLab's unrivalled interface, and the comprehensive range of other Toolboxes that can be used in conjunction with the Optimization Toolbox, make this the product of choice for many workers in system modelling.
Global Optimization 4.2 and MathOptimizer 1.0 were applied to several global optimisation problems. Some were standard problems and some were from my own research. Two of the standard problems are Trefethen's problem 4: find the global minimum of

f(x, y) = exp(sin(50x)) + sin(60e^y) + sin(70 sin(x)) + sin(sin(80y)) - sin(10(x + y)) + (x^2 + y^2)/4,

and Trefethen's problem 9: find the value of 'a' in [0, 5] that maximises the integral from 0 to 2 of [2 + sin(10a)] x^a sin(a/(2 - x)) dx.
In all cases, both packages solved these problems easily. Indeed, Mathematica's built-in function NMinimize[ ] solved the first problem with the 'DifferentialEvolution' method. However, the built-in functions tend to be slower in general, which could be significant in large, complex problems; for these particular problems the differences in timing are negligible. Provided the solution is 'good enough' and is attained in a 'reasonable time', the algorithm works!
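For reference, Trefethen's problem 4 takes only a few lines to set up and attack with a generic differential-evolution search (SciPy is used here as a stand-in for the packages reviewed); its global minimum is approximately -3.3069:

```python
# Trefethen's problem 4, attacked with a generic differential-evolution
# global search (SciPy here, as a stand-in for the packages reviewed).
import numpy as np
from scipy.optimize import differential_evolution

def f(p):
    x, y = p
    return (np.exp(np.sin(50 * x)) + np.sin(60 * np.exp(y))
            + np.sin(70 * np.sin(x)) + np.sin(np.sin(80 * y))
            - np.sin(10 * (x + y)) + (x**2 + y**2) / 4)

# The quadratic term confines the minimum to a small region near the origin,
# so a modest bounding box suffices despite the wild oscillations.
res = differential_evolution(f, bounds=[(-1, 1), (-1, 1)],
                             seed=3, popsize=50, tol=1e-8, maxiter=500)
print(res.x, res.fun)
```

The function's rapid oscillations create a dense carpet of local minima, which is exactly why a population-based global method is the right opening move here and why purely local solvers are so easily fooled by it.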
J. D. Pinter, Computational Global Optimization in Nonlinear Systems: An Interactive Tutorial. Lionheart Publishing, 2001.
M. A. Bhatti, Practical Optimization Methods with Mathematica Applications. Springer-Verlag, 2000.
J. D. Pinter, Global Optimization in Action. Kluwer Academic Publishers, 1996.
J. D. Pinter, Global Optimization: Scientific and Engineering Applications. Elsevier, to be published.
N. Trefethen, 'The hundred dollar, hundred digit challenge,' SIAM News, January/February 2002.