DAVID W. DELEY
[.DAVID.OPTIMIZE]    (OS independent.  Code should be easily transportable to other systems.)

OPTIMIZATION

Concept:

Given a scalar function F, which is a function of several scalar
variables {X(1),X(2),...,X(n)}, find the values of {X(1),X(2),...,X(n)}
which MINIMIZE the value of function F.


Example:

Find {X(1),X(2)} which minimizes the function:
  _
F(X) = 10 * X(1)**4 - 20 * X(1)**2 * X(2) + 10 * X(2)**2 + X(1)**2 - 2*X(1) + 5
       (the banana function)

Technique:

If you're not familiar with optimization techniques you may wish to
check out a book on the subject.  The references below give some
examples.

The general technique is to start at an arbitrary point X{0} (vector),
and search in a given direction S{0} (vector) for a minimum.  When the
minimum is found, that becomes our new point X{1} (vector).

              X{1} = X{0} + ALPHA*S{0}

We then repeat the process, starting at X{1}, searching this time in a
new direction S{1} until we have found a minimum in the new S{1}
direction, and making that our new point X{2}.

There are numerous techniques for choosing the next search direction and
for searching along the search direction for a minimum.  The worst
technique is simply to start over at each new point and search in the
direction of steepest descent.  Most people start out with this
technique because it is simple in concept, but it can quickly bog down
and become exceedingly slow.
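As a sketch of this iteration (in Python rather than FORTRAN, with a
hypothetical smooth objective; this is not the code in OPTI2.FOR):

```python
# Each iteration picks a search direction S and a step length alpha,
# then moves to the new point:  X{k+1} = X{k} + alpha*S{k}.
# Here S is the steepest-descent direction, S = -gradient(F), and
# alpha is a small fixed step for simplicity.
def steepest_descent(grad, x, alpha=0.01, iters=5000):
    for _ in range(iters):
        s = [-g for g in grad(x)]                      # search direction S{k}
        x = [xi + alpha * si for xi, si in zip(x, s)]  # X{k+1} = X{k} + alpha*S{k}
    return x

# Hypothetical objective F(x,y) = (x-3)**2 + (y+1)**2, minimum at (3,-1):
grad = lambda x: [2.0 * (x[0] - 3.0), 2.0 * (x[1] + 1.0)]
print(steepest_descent(grad, [0.0, 0.0]))    # approaches [3.0, -1.0]
```

A serious implementation searches along S for the best alpha instead of
taking a fixed step; that one-dimensional search is what the bracketing
and golden section routines in OPTI2.FOR provide.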

Two better techniques are provided in OPTI2.FOR for your use.  One is
the Conjugate Direction Method of Fletcher and Reeves (CDM), and the
other is POWELL's method.  POWELL's method was used to find the optimum
coefficients for the spline in program NEXTPRIME.FOR in directory [.PRIMES] .

For an example of how these methods are better:  I originally tried to
find the optimum coefficients for the spline in program NEXTPRIME.FOR
in directory [.PRIMES] using the steepest descent method, because it
was quick and easy to write and I didn't have this optimization code
written yet.  I let the program run for several hours
trying to squeeze out the best possible results.  Later when this
optimization code was written I reran the problem using POWELL's method.
It took less than a minute and converged to a better answer than I had
obtained after hours of steepest descent work.

The technique provided in OPTI2.FOR for a one dimensional search along
the S{} direction is the Golden Section Method.  A brief discussion of
the Golden Section Method is provided in subroutine GOLDEN_SECTION.

To optimize a function the top level routine must provide the following:
    1.  The function F to be minimized.  The function is called EVAL.
        Its purpose when called is to evaluate the function at the
        point X (vector) which is passed to it as a parameter.

    2.  An initial point X (vector) to start the search at.

    3.  NDV - The number of design variables, or the dimension of vector X.

    4.  STEP - A step size to use when searching for bounds for the
        golden section method.  Too small a step size will take too
        long, too large a step size may be too coarse.

    5.  MAXSTEP - Maximum number of steps for one-dimensional search.
        A safety catch in case the search for bounds doesn't find the
        bounds in a reasonable amount of time.

    6.  ARES - Alpha resolution.  Determines how close to the minimum
        the golden section search must get before it can terminate and
        say it's there.  Too small may cause golden section search to
        take too long.  Too large may cause it to terminate prematurely.

    7.  DS - Delta step size.  Used by subroutine GRADIENT for the CDM
        method only.  The step size to take when determining the
        gradient of the function.  The step size must be large enough so
        a gradient can be detected, but not so large that the local
        gradient is missed.

    8.  MAXIT - Maximum number of iterations.  A safety catch in case the
        search doesn't converge on the minimum and terminate normally.


Example:

File OPTI2.FOR, PROGRAM MYMAIN, is currently set up to solve the
minimization problem presented in routine NEXTPRIME.FOR in directory
[-.PRIMES].  That problem is further discussed in file NEXTPRIME.FOR .
For this problem we have a real function EVAL which when given our
design vector X and the number of design variables NDV will return for
us a real number.  This number we wish to make as small as possible.  We
choose an arbitrary starting point for vector X, and choose some
appropriate values for the other variables.  For technical reasons
discussed below we choose to use the POWELL method of optimization.
Then run it, and be impressed by how quickly it zeros in on the optimum
values for X.

To run this example:
      $ FORTRAN OPTI2,FCNPRIM
      $ LINK OPTI2,FCNPRIM
      $ RUN OPTI2

The output shows the current X vector and the current value of the
objective function EVAL, which we wish to make as small as possible.

The rest is somewhat hands on experience.  We choose to use the POWELL
method here because our objective function EVAL looks like a staircase
instead of a smooth function.  The CDM method uses gradients whereas the
POWELL method does not.  This makes the CDM method more efficient in
some situations, but the evaluation of a gradient on a staircase
function is not feasible.

Also note that some scaling of variable X(3) is done in routine EVAL.
It turns out that function EVAL is very sensitive to small changes in
X(3), so we scale that variable to make its sensitivity about the same
as that of the other two variables X(1) and X(2).
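The scaling trick can be sketched in Python (raw_eval and the numbers
here are hypothetical stand-ins, not the actual NEXTPRIME objective):

```python
# If the objective is far more sensitive to x3 than to x1 and x2, let
# the optimizer work on a scaled variable and undo the scaling inside
# the objective wrapper, just as EVAL passes X(3)/10000.0 to FCNOBJ.
SCALE = 10000.0

def raw_eval(x1, x2, x3):
    # hypothetical stand-in objective, very sensitive to x3
    return (x1 - 1.0)**2 + (x2 - 2.0)**2 + (SCALE * (x3 - 0.0003))**2

def eval_scaled(x):
    # x[2] is the scaled design variable; divide to recover x3
    return raw_eval(x[0], x[1], x[2] / SCALE)

# At the scaled optimum x = [1, 2, 3] each term now changes at a
# comparable rate for a unit change in each variable:
print(eval_scaled([1.0, 2.0, 3.0]))     # 0.0
```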


Example #2:

To solve the banana function given at the beginning, replace in file
OPTI2.FOR the main routine and EVAL function (and FCNOBJ) with the
following:

  PROGRAM MYMAIN
  IMPLICIT NONE
  PARAMETER NDV = 2   !Number of design variables
  INTEGER MAXSTEP,MAXIT
  REAL XOUT(NDV),X(NDV),STEP,ARES,DS,S(NDV),OBJ
  EXTERNAL EVAL
  REAL EVAL
  STEP = 0.1      !Step size when stepping to find bounds
  MAXSTEP = 700     !Maximum number of steps before abort
  ARES = 0.001      !alpha resolution
  DS = 0.001      !delta step size when determining slope
  MAXIT = 20      !Maximum number of iterations

  X(1) = -4.0     !Initial X values
  X(2) = -4.0
  OBJ = EVAL(X,NDV)
  WRITE(6,99) OBJ
  WRITE(6,101) X
99  FORMAT( ' INITIAL CONDITION.  OBJ = ', F )
101 FORMAT( F )

  CALL CDM(X,NDV,STEP,MAXSTEP,ARES,DS,MAXIT,EVAL)   !Main call
C CALL POWELL(X,NDV,STEP,MAXSTEP,ARES,MAXIT,EVAL)
  STOP
  END

  REAL FUNCTION EVAL(X,NDV)   !Function to be minimized
  IMPLICIT NONE
  INTEGER NDV
  REAL X(NDV)
  EVAL = 10 * X(1)**4 - 20 * X(1)**2 * X(2) + 10 * X(2)**2
     *    + X(1)**2 - 2*X(1) + 5
C       (The banana function)
  RETURN
  END

This time there are only two design variables, X(1) and X(2), and we
have chosen the CDM method instead of the POWELL method; either one
should work well.  We have also increased MAXIT (the maximum number of
iterations) to 20.  Try running this and you'll see it very quickly
comes close to the correct values of X(1) = 1 and X(2) = 1, where the
objective function attains its minimum value of 4.
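As a quick sanity check of those values (a Python sketch, independent of
OPTI2.FOR):

```python
# The banana function from the beginning of this file.
def banana(x1, x2):
    return (10 * x1**4 - 20 * x1**2 * x2 + 10 * x2**2
            + x1**2 - 2 * x1 + 5)

print(banana(1.0, 1.0))      # 4.0 -- the minimum value
# Nearby points all evaluate higher:
print(banana(1.1, 1.1))      # about 4.13
print(banana(0.9, 0.9))      # about 4.09
```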


REFERENCES:

There are other methods besides POWELL's and CDM discussed in the
literature.  VMA ENGINEERING listed below sells software which uses more
sophisticated algorithms for optimization, for those really big jobs
where you can't wait all year for a less efficient algorithm to do the
job.

Vanderplaats, Garret N.  "Numerical Optimization Techniques for
   Engineering Design:  With Applications"  McGraw-Hill Book Company  1984

Fox, Richard L.  "Optimization Methods for Engineering Design"
   Addison-Wesley Publishing Company, Reading, Mass.  1971.  Series:
   Addison-Wesley Series in Mechanics and Thermodynamics.

Aoki, Masanao  "Introduction to Optimization Techniques: Fundamentals
   and Applications of Nonlinear Programming."  The Macmillan Company,
   New York.  1971.

Luenberger, David G.  "Linear and Nonlinear Programming" second edition.
   Addison-Wesley Publishing Company.  1984.


The following company specializes in providing advanced state of the art
optimization programs and consulting (the president was a former college
professor of mine):

   VMA ENGINEERING  (New address as of Sept. 1994)
   1767 South 8th street Suite M-200
   Colorado Springs, CO 80906
   (719)473-4611
   (719)473-4638 FAX
   Internet: vanderplaats@vma.com


--------------------------------------------------------------------------------
  PROGRAM MYMAIN
	IMPLICIT NONE
	PARAMETER NDV = 3		!Number of design variables
	INTEGER MAXSTEP,MAXIT
	REAL XOUT(NDV),X(NDV),STEP,ARES,DS,S(NDV),OBJ
	EXTERNAL EVAL
	REAL EVAL
	STEP = 0.1			!Step size when stepping to find bounds
	MAXSTEP = 700			!Maximum number of steps before abort
	ARES = 0.001			!alpha resolution
	DS = 0.001			!delta step size when determining slope
	MAXIT = 3			!Number of restarts to do

	X(1) = 4.2			!Initial X values
	X(2) = 0.2
	X(3) = -0.3
	OBJ = EVAL(X,NDV)
	WRITE(6,99) OBJ
	WRITE(6,101) X
99	FORMAT( ' INITIAL CONDITION.  OBJ = ', F )
101	FORMAT( F )

C	CALL CDM(X,NDV,STEP,MAXSTEP,ARES,DS,MAXIT,EVAL)		!Main call
	CALL POWELL(X,NDV,STEP,MAXSTEP,ARES,MAXIT,EVAL)
	STOP
	END

	REAL FUNCTION EVAL(X,NDV)		!Function to be minimized
	IMPLICIT NONE
	REAL FCNOBJ
	INTEGER NDV
	REAL X(NDV)
	EVAL = FCNOBJ( X(3)/10000.0, X(2), X(1) )
	RETURN
	END

	REAL FUNCTION FCNOBJ( A3, A2, A1 )
C	find error for every value of I from 4 to 997
C	total up error
	INTEGER FCNPRIM,ER,TOTER
	TOTER = 0
	DO I=4,997
	POLY = A3*(I**2) + A2*I + A1
	IFC = FCNPRIM(I)
	FFC = FLOAT(IFC)
	ER = ABS( INT(POLY) - IFC )
	TOTER = TOTER + ER
	ENDDO
	FCNOBJ = FLOAT(TOTER)
	RETURN
	END

C------------------------------------------------------------------------------

	SUBROUTINE NORMALIZE(VN,VDOTV,V,NDV)
C  Function:
C	Normalize vector V returning normalized vector VN and scalar VDOTV
C	Evaluate:
C		{VN} = 1/({V} dot {V}) * {V}
C	where:
C		{V}  = column vector to be normalized
C		{VN} = normalized column vector
C		{V} dot {V} = inner product of vector {V} with itself
C
C  Arguments:
C	VN    - out. Normalized vector.  Result of normalizing {V}.
C	             Real array of dimension NDV.
C	VDOTV - out. Real scalar.  The scale value used to normalize the
C	             vector V.  The magnitude of vector V.
C	V     - in.  Vector to be normalized.  Real array of dimension NDV.
C	NDV   - in.  Integer.  Dimension of vector V  (Number of Design Variables)
C
	IMPLICIT NONE
	INTEGER NDV
	REAL VN(NDV),V(NDV),VDOTV
	INTEGER I

	VDOTV = 0.0
	DO I=1,NDV
	  VDOTV = VDOTV + V(I)*V(I)
	  ENDDO
	VDOTV = SQRT( VDOTV )
	IF (VDOTV .EQ. 0.0) RETURN		!zero vector
	DO I=1,NDV
	  VN(I) = V(I)/VDOTV
	ENDDO
	RETURN
	END
C------------------------------------------------------------------------------

	SUBROUTINE SCALARMULT(AV,V,ALPHA,NDV)
C  Function:
C	Multiply vector V by scalar ALPHA, returning vector AV.
C	Evaluate:
C		{AV} = ALPHA*{V}
C	where:
C		{AV}  = column vector
C		{V}   = column vector
C		ALPHA = scalar
C
C  Arguments:
C	AV    - out. Result vector of multiplying ALPHA*V.
C	V     - in.  Vector.  Real array of dimension NDV.
C	ALPHA - in.  Real scalar variable
C	NDV   - in.  Integer.  Dimension of vector V  (Number of Design Variables)
C
	IMPLICIT NONE
	INTEGER NDV
	REAL AV(NDV),V(NDV),ALPHA
	INTEGER I
	DO I=1,NDV
	  AV(I) = ALPHA*V(I)
	ENDDO
	RETURN
	END
C------------------------------------------------------------------------------

	SUBROUTINE ADDVEC(VR,V1,V2,NDV)
C  Function:
C	Add two vectors together.  Add vector V1 to vector V2 and return
C	vector VR.
C	Evaluate:
C		{VR} = {V1} + {V2}
C	where
C		{V1} = column vector
C		{V2} = column vector
C		{VR} = column vector
C
C  Arguments:
C	VR  - out. Result vector of adding V1 + V2
C	V1  - in.  Vector #1.  Real array of dimension NDV
C	V2  - in.  Vector #2.  Real array of dimension NDV
C	NDV - in.  Integer.  Dimension of vectors V1, V2, and VR.  (Number of Design Variables)
C
	IMPLICIT NONE
	INTEGER NDV
	REAL V1(NDV),V2(NDV),VR(NDV)
	INTEGER I
	DO I=1,NDV
	  VR(I) = V1(I) + V2(I)
	ENDDO
	RETURN
	END
C------------------------------------------------------------------------------

	SUBROUTINE COPYVEC(VOUT,VIN,NDV)
C  Function:
C	Copy one vector to another.
C	Perform:
C		{VOUT} = {VIN}
C	where
C		{VOUT} = column vector
C		{VIN}  = column vector
C
C  Arguments:
C	VOUT - out. Real array of dimension NDV
C	VIN  - in.  Real array of dimension NDV
C	NDV - in.   Integer.  Dimension of vectors VIN and VOUT.  (Number of Design Variables)
C
	IMPLICIT NONE
	INTEGER NDV
	REAL VOUT(NDV),VIN(NDV)
	INTEGER I
	DO I=1,NDV
	  VOUT(I) = VIN(I)
	ENDDO
	RETURN
	END
C------------------------------------------------------------------------------

	SUBROUTINE DOT(A,X1,X2,NDV)
C  Function:
C	Calculates the dot product of {X1} and {X2}
C	Calculates:
C		a = {X1}*{X2}
C	where:
C		a     - a real scalar
C		{X1}  - a vector of dimension NDV
C		{X2}  - a vector of dimension NDV
C
C  Arguments:
C	A   - out.   Scalar.  Real.
C	X1   - in.   Input vector.  Real array of dimension NDV
C	X2   - in.   Input vector.  Real array of dimension NDV
C	NDV  - in.   Integer.  Dimension of vectors X1 and X2.
C
	IMPLICIT NONE
	INTEGER NDV
	REAL A,X1(NDV),X2(NDV)
	INTEGER I
	A = 0.0
	DO I=1,NDV
	  A = A + X1(I)*X2(I)
	ENDDO
	RETURN
	END
C------------------------------------------------------------------------------

	SUBROUTINE MSHIFTL(H,NDV)
C  Function:
C	Matrix SHIFT Left.  Shift each column of the H matrix one to
C	the left, eliminating the first column.
C
C  Arguments:
C	H   - modify.  An NDV x NDV matrix.  A real 2 dimensional array.
C	NDV - in.  The dimension of matrix H.
C
C Outline:
C	Move down column 1 of matrix H copying over the contents of column 2.
C	Repeat for each column copying over the contents of column+1.
C	Finish with column = NDV-1.  Leave the last column unchanged.
C	Note:  We move down columns instead of across rows because that
C	is the natural way FORTRAN stores multidimensional arrays in memory,
C	with the first subscript changing most often.
C
	IMPLICIT NONE
	INTEGER NDV
	REAL H(NDV,NDV)
	INTEGER I,J
	DO J=1,NDV-1
	  DO I=1,NDV
	    H(I,J) = H(I,J+1)
	  ENDDO
	ENDDO
	RETURN
	END
C------------------------------------------------------------------------------
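The effect of MSHIFTL can be pictured with a small Python sketch (lists
of rows standing in for the FORTRAN array):

```python
# Shift every column of a square matrix one position to the left,
# discarding the first column; the last column is left in place
# (and so appears twice), exactly as MSHIFTL leaves it.
def mshiftl(h):
    n = len(h)
    for j in range(n - 1):        # target columns 1 .. NDV-1
        for i in range(n):
            h[i][j] = h[i][j + 1]
    return h

m = [[11, 12, 13],
     [21, 22, 23],
     [31, 32, 33]]
print(mshiftl(m))   # [[12, 13, 13], [22, 23, 23], [32, 33, 33]]
```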

	SUBROUTINE COLVECH(V,N,H,NDV,MAXNDV)
C  Function:
C	Return the Nth column vector of matrix H.
C
C  Example:       +-------------- first column
C	          |   +---------- second column
C	          |   |   +------ third column
C	          v   v   v
C	         a11 a12 a13
C	     H = a21 a22 a23
C	         a31 a32 a33
C
C  Arguments:
C	V   - out.  Vector, the Nth column of H.  A real array of dimension NDV.
C	N   - in.   The column number to return.  Integer.
C	H   - in.   A square matrix of dimension NDV x NDV.  A real 2d array.
C	NDV - in.   The logical dimension of H and V.  Integer.
C	MAXNDV - in. The physical dimension of H.  Used to declare array H.
C
	IMPLICIT NONE
	INTEGER MAXNDV,NDV,N
	REAL H(MAXNDV,MAXNDV),V(NDV)
	INTEGER J
	DO J=1,NDV
	  V(J) = H(J,N)
	ENDDO
	RETURN
	END
C------------------------------------------------------------------------------

	SUBROUTINE ALPHAX(XOUT,ALPHA,X,S,NDV)
C  Function:
C	Outputs vector XOUT given an initial X vector and an ALPHA step.
C	Calculates:
C	    {XOUT} = {X} + alpha*{S}
C	where
C	    {XOUT} - column vector
C	    {X}    - column vector
C	    alpha  - scalar
C	    {S}    - column vector
C
C  Arguments:
C	XOUT - out. New point
C	ALPHA- in.  Real scalar.  Value of alpha.
C	X    - in.  Initial set of design variables.  Real array of
C	            dimension NDV.  This is used as our starting point.
C	S    - in.  Normalized unit vector defining the search direction
C	            to search in.  Real array of dimension NDV.
C	NDV  - in.  Number of Design Variables.  Integer dimension of
C	            arrays XOUT, X, and S.
C
	IMPLICIT NONE
	INTEGER NDV
	REAL XOUT(NDV),X(NDV),S(NDV),ALPHA
	PARAMETER MAXNDV = 4		!maximum NDV supported; NDV must not exceed this
	REAL TMP(MAXNDV)
	CALL SCALARMULT(TMP,S,ALPHA,NDV)	!alpha*{S}
	CALL ADDVEC(XOUT,TMP,X,NDV)		!{X} + alpha*{S}
	RETURN
	END
C------------------------------------------------------------------------------

	SUBROUTINE FIND_BOUNDS(AUPPER,ALOWER,X,S,NDV,STEP,MAXSTEP,EVAL)
C  Function:
C	Find upper and lower bounds for ALPHA between which the objective
C	function contains a minimum when searching in direction S.
C	Find bounds on alpha such that:
C
C		EVAL( {X} + alpha*{S} )
C
C	contains a local minimum between the found bounds.
C
C Arguments:
C	AUPPER - out.  Real scalar.  Upper bound on alpha.
C	ALOWER - out.  Real scalar.  Lower bound on alpha.
C	X      - in.  Initial set of design variables.  Real array of
C	              dimension NDV.  This is used as our starting point.
C	S      - in.  Normalized unit vector defining the search direction
C	              to search in.  Real array of dimension NDV.
C	NDV    - in.  Number of Design Variables.  Integer dimension of
C	              arrays X and S.
C	STEP   - in.  Step size for each design variable to use.  Real Scalar
C	MAXSTEP- in.  Maximum number of steps to take before aborting.  Integer.
C	              If bounds on the minimum are not found within MAXSTEP
C	              steps, then execution aborts with an error message.
C	              This variable is intended to prevent an infinite
C	              loop condition.
C	EVAL   - in.  Function which evaluates the user's objective function
C	              at X and returns the functional result as a real scalar
C	              value.  This is declared as an EXTERNAL routine.
C	               The routine is called as follows:
C	                    RESULT = EVAL(X,NDV)
C	               where:
C	                    X - Array of design variables the objective
C	                        function is to be evaluated at.
C	                    NDV - Number of design variables.  The dimension
C	                          of array X.
C	                    RESULT - Real scalar.
C
C  Outline:
C	1.  Evaluate function at X.
C	2.  Step STEP in S direction and evaluate function there
C	      to determine slope.
C	3.  Begin making unit steps in downward direction until
C	    function starts sloping up again.
C
	IMPLICIT NONE
	INTEGER NDV,MAXSTEP
	REAL X(NDV),S(NDV),STEP,AUPPER,ALOWER
	EXTERNAL EVAL
	REAL EVAL

	PARAMETER MAXNDV = 4
	REAL OBJ1,OBJ2			!objective function at points 1,2,3
	REAL ALPHA,ASTEP		!alpha and alpha step direction
	REAL XP(MAXNDV)			!test point for objective function
	INTEGER NUMIT
	logical go /.true./

C	1.  Evaluate objective function at X:
	OBJ1 = EVAL( X, NDV )

C	2.  Step STEP in S direction and evaluate
C	    objective function at ( {X} + alpha*{S} )
	ASTEP = STEP
	ALPHA = ASTEP
	CALL ALPHAX(XP,ALPHA,X,S,NDV)		!{XP} = {X} + alpha*{S}
	OBJ2 = EVAL( XP, NDV )

C	Determine search direction:
	IF (OBJ2 .GT. OBJ1) ASTEP = -ASTEP	!then search in other direction

C	  3.  Loop making steps in downward direction until
C	      objective function starts sloping up again.
C	      a.  Step in S direction and evaluate
C	            objective function at ( {X} + alpha*{S} )
C	      b.  See if objective function has started sloping up again
C	           i.  exitloop with error if MAXSTEP exceeded
C	               (maximum iterations allowed while attempting to find
C	                bounds.)
C	           ii. exitloop if it is sloping up again
C	          iii. else take another step
	DO NUMIT = 1,MAXSTEP
	  ALPHA = ALPHA + ASTEP
	  CALL ALPHAX(XP,ALPHA,X,S,NDV)		!{XP} = {X} + alpha*{S}
	  OBJ2 = EVAL( XP, NDV )
	if (.not. go) then
	write(13,1) NUMIT, OBJ2, XP(1), XP(2)
1	format( 1x, I4, F, F, F )
	endif

C	  See if objective function has started sloping up again:
	  IF (OBJ2 .GT. OBJ1 .AND. GO) THEN
C	    The bounds are ALPHA and (ALPHA two steps ago)
C	    Set ALOWER to the lower bound and AUPPER to the upper bound
	    IF (ASTEP .GT. 0) THEN
	      AUPPER = ALPHA
	      ALOWER = ALPHA - 2.0*ASTEP
	    ELSE
	      ALOWER = ALPHA
	      AUPPER = ALPHA - 2.0*ASTEP
	    ENDIF
	    RETURN				!Normal exit here
	  ELSE
C	    Else discard X1 and OBJ1 and take another step
	    OBJ1 = OBJ2
	  ENDIF
	ENDDO
C  If we drop out the bottom then the maximum number of iterations
C  was exceeded.
	PRINT*, ' Maximum steps exceeded while attempting to find bounds'
	PRINT*, ' for one dimensional search.  Increase STEP size, increase'
	PRINT*, ' maximum number of steps MAXSTEP, or function may not'
	PRINT*, ' have a minimum nearby.'
	STOP
	END
C------------------------------------------------------------------------------
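The bracketing logic of FIND_BOUNDS can be sketched in Python (a
simplified one-dimensional version; f(alpha) stands for
EVAL( {X} + alpha*{S} )):

```python
# Walk along alpha in fixed steps, downhill, until the function value
# rises again; the minimum then lies between the current alpha and the
# alpha from two steps back.
def find_bounds(f, step=0.1, maxstep=700):
    obj1 = f(0.0)
    astep = step
    alpha = astep
    if f(alpha) > obj1:              # first step went uphill:
        astep = -astep               # search in the other direction
    for _ in range(maxstep):
        alpha += astep
        obj2 = f(alpha)
        if obj2 > obj1:              # started sloping up again
            return tuple(sorted((alpha - 2.0 * astep, alpha)))
        obj1 = obj2                  # still descending; keep stepping
    raise RuntimeError("maximum steps exceeded while finding bounds")

lo, hi = find_bounds(lambda a: (a - 1.7)**2)
print(lo < 1.7 < hi)     # True -- the minimum is bracketed
```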

	SUBROUTINE GOLDEN_SECTION(ALPHA,X,OBJ,AUPPER,ALOWER,S,NDV,ARES,EVAL)
C  Function:
C	Find the minimum of the objective function EVAL in direction S
C	Find alpha which minimizes the following:
C
C		EVAL( {X} + alpha*{S} )
C
C Arguments:
C	ALPHA  - out. ALPHA which minimizes EVAL in S direction.  Real scalar.
C	X      - modify.  On input, initial set of design variables.
C	              On output, final set of design variables which minimizes
C	              EVAL in S direction.  Real array of dimension NDV.
C	OBJ    - out. Value of Objective function EVAL at returned point X
C	AUPPER - in.  Real scalar.  Initial upper bound on alpha.
C	ALOWER - in.  Real scalar.  Initial lower bound on alpha.
C	S      - in.  Normalized unit vector defining the search direction
C	              to search in.  Real array of dimension NDV.
C	NDV    - in.  Number of Design Variables.  Integer dimension of
C	              arrays X and S.
C	ARES   - in.  Alpha resolution.  Real scalar.  When the change in
C	              alpha is less than this amount, convergence is declared
C	              and we return.
C	EVAL   - in.  Function which evaluates the user's objective function
C	              and returns the functional result as a real scalar
C	              value.  This is declared as an EXTERNAL routine.
C	               The routine is called as follows:
C	                    RESULT = EVAL(X,NDV)
C	               where:
C	                    X - Array of design variables the objective
C	                        function is to be evaluated at.
C	                    NDV - Number of design variables.  The dimension
C	                          of array X.
C	                    RESULT - Real scalar.
C
C Background:
C	Assume we have a function F of one variable X and we wish to
C	find the minimum of this function.  The golden section method
C	is an algorithm for determining the value X which minimizes a
C	one-variable function.
C
C	Assume lower and upper bounds on X are known which bracket the
C	minimum.  Now pick two intermediate points X1 and X2 with X1 <
C	X2 and evaluate the function at these points.  Assuming the
C	function is unimodal within the range being tested, either X1
C	will become a new lower bound or X2 will become a new upper
C	bound.
C
C	Because after our initial choice of X(lower), X(upper), and
C	X1, we will require one function evaluation for each
C	iteration, it follows that the most efficient algorithm is one
C	which will reduce the bounds by the same fraction on each
C	iteration.
C
C	If X(lower) = 0 and X(upper) = 1, it turns out the optimum
C	choice of X1 and X2 is:
C		X1 = 0.38197
C		X2 = (1 - X1) = 0.61803
C
C	The ratio of X2 to X1, about 1.61803, is the famous "golden ratio".
C
C Outline:
C	1.  Calculate two intermediate points and evaluate:
C	        alpha1 = (1-tau)*alpha_lower + tau*alpha_upper
C	        X1 = X(alpha1)
C	        OBJ1 = EVAL(X1)
C	        alpha2 = tau*alpha_lower + (1-tau)*alpha_upper
C	        X2 = X(alpha2)
C	        OBJ2 = EVAL(X2)
C	2.  LOOP until (alpha_upper-alpha_lower < ARES)
C	      If OBJ1 > OBJ2 then
C	        alpha_lower = alpha1		!alpha1 becomes new lower bound
C	        alpha1 = alpha2
C	        OBJ1 = OBJ2
C	        alpha2 = tau*alpha_lower + (1-tau)*alpha_upper
C	        X2 = X(alpha2)
C	        OBJ2 = EVAL(X2)
C	      Else
C	        alpha_upper = alpha2		!alpha2 becomes new upper bound
C	        alpha2 = alpha1
C	        OBJ2 = OBJ1
C	        alpha1 = (1-tau)*alpha_lower + tau*alpha_upper
C	        X1 = X(alpha1)
C	        OBJ1 = EVAL(X1)
C	      Endif
C	    ENDLOOP
C	3.  Use the average of alpha_upper and alpha_lower as the solution.
C
	IMPLICIT NONE
	INTEGER NDV
	REAL X(NDV),S(NDV)
	REAL AUPPER,ALOWER,ARES,ALPHA,OBJ
	EXTERNAL EVAL
	REAL EVAL

	PARAMETER MAXNDV = 4
	REAL X1(MAXNDV),X2(MAXNDV)
	REAL ALPHA1,ALPHA2,ALPHA_L,ALPHA_U	!alphas for X1,X2,X(lower),X(upper)
	REAL OBJ1, OBJ2				!value of objective function at X1 and X2

	PARAMETER TAU = 0.381966011		!(3-SQRT(5))/2
	ALPHA_U = AUPPER
	ALPHA_L = ALOWER

	ALPHA1 = (1.0 - TAU)*ALPHA_L + TAU*ALPHA_U
	CALL ALPHAX(X1,ALPHA1,X,S,NDV)		!{X1} = {X} + alpha1*{S}
	OBJ1 = EVAL(X1,NDV)

	ALPHA2 = TAU*ALPHA_L + (1.0 - TAU)*ALPHA_U
	CALL ALPHAX(X2,ALPHA2,X,S,NDV)		!{X2} = {X} + alpha2*{S}
	OBJ2 = EVAL(X2,NDV)

C	LOOP
	DO WHILE ( ALPHA_U-ALPHA_L .GT. ARES )
	  IF ( OBJ1 .GT. OBJ2 ) THEN
	    ALPHA_L = ALPHA1			!alpha1 becomes new lower bound
	    ALPHA1 = ALPHA2
	    OBJ1 = OBJ2
	    ALPHA2 = TAU*ALPHA_L + (1.0 - TAU)*ALPHA_U
	    CALL ALPHAX(X2,ALPHA2,X,S,NDV)	!{X2} = {X} + alpha2*{S}
	    OBJ2 = EVAL(X2,NDV)
	  ELSE
	    ALPHA_U = ALPHA2			!alpha2 becomes new upper bound
	    ALPHA2 = ALPHA1
	    OBJ2 = OBJ1
	    ALPHA1 = (1.0 - TAU)*ALPHA_L + TAU*ALPHA_U
	    CALL ALPHAX(X1,ALPHA1,X,S,NDV)	!{X1} = {X} + alpha1*{S}
	    OBJ1 = EVAL(X1,NDV)
	  ENDIF
	ENDDO

C  Calculate best alpha and return
	ALPHA = (ALPHA_U+ALPHA_L)/2.0
	CALL ALPHAX(X,ALPHA,X,S,NDV)		!{X} = {X} + alpha*{S}
	OBJ = EVAL(X,NDV)
	RETURN
	END
C------------------------------------------------------------------------------
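The golden section loop can be sketched in Python; tau = (3-SQRT(5))/2
as in the FORTRAN, and each pass reuses one of the two interior
function values:

```python
import math

TAU = (3.0 - math.sqrt(5.0)) / 2.0       # 0.381966..., as in the FORTRAN

# Golden-section search for the minimum of a unimodal f on [lo, hi];
# the bracket shrinks by the factor (1 - TAU) ~ 0.618 per iteration
# at the cost of a single new function evaluation.
def golden_section(f, lo, hi, ares=1e-5):
    a1 = (1.0 - TAU) * lo + TAU * hi
    a2 = TAU * lo + (1.0 - TAU) * hi
    f1, f2 = f(a1), f(a2)
    while hi - lo > ares:
        if f1 > f2:                      # a1 becomes the new lower bound
            lo, a1, f1 = a1, a2, f2
            a2 = TAU * lo + (1.0 - TAU) * hi
            f2 = f(a2)
        else:                            # a2 becomes the new upper bound
            hi, a2, f2 = a2, a1, f1
            a1 = (1.0 - TAU) * lo + TAU * hi
            f1 = f(a1)
    return (lo + hi) / 2.0               # average of the final bounds

print(golden_section(lambda a: (a - 1.7)**2, 1.6, 1.8))   # ~1.7
```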

	SUBROUTINE ONEDSEARCH(ALPHA,X,OBJ,S,NDV,STEP,MAXSTEP,ARES,EVAL)
C  Function:
C	Find the minimum of the objective function EVAL in direction S
C	Find alpha which minimizes the following:
C
C		EVAL( {X} + alpha*{S} )
C
C	First the slope of the function in direction S is determined,
C	then steps are made in the downward direction until upper and
C	lower bounds are found, then golden section search is used to
C	narrow down the minimum point.
C
C Arguments:
C	ALPHA  - out.  ALPHA which minimizes EVAL in S direction.  Real scalar
C	               This is alpha for the unnormalized {S} vector, not
C	               the normalized {SN} vector used within ONEDSEARCH.
C	X      - modify.  On input, initial set of design variables.
C	               On output, final set of design variables which minimizes
C	               EVAL in S direction.  Real array of dimension NDV.
C	OBJ    - out.  Value of Objective function EVAL at returned X.  Real scalar.
C	S      - in.  Unnormalized unit vector defining the search direction
C	              to search in.  Real array of dimension NDV.
C	NDV    - in.  Number of Design Variables.  Integer dimension of
C	              arrays X and S.
C	ARES   - in.  Alpha resolution.  Real scalar.  Used by golden section
C	              search.  When the change in alpha is less than this
C	              amount, convergence is declared.
C	STEP   - in.  Used by FIND_BOUNDS.  Step size to use when searching
C	              for upper and lower bounds on the minimum.  Real Scalar
C	MAXSTEP- in.  Used by FIND_BOUNDS.  Maximum number of
C	              steps to take before aborting.  Integer.
C	              If bounds on the minimum are not found within MAXSTEP
C	              steps, then execution aborts with an error message.
C	              This variable is intended to prevent an infinite
C	              loop condition.
C	EVAL   - in.  Function which evaluates the user's objective function
C	              and returns the functional result as a real scalar
C	              value.  This is declared as an EXTERNAL routine.
C	               The routine is called as follows:
C	                    RESULT = EVAL(X,NDV)
C	               where:
C	                    X - Array of design variables the objective
C	                        function is to be evaluated at.
C	                    NDV - Number of design variables.  The dimension
C	                          of array X.
C	                    RESULT - Real scalar.
C
C Outline:
C	1.  Normalize S vector  ( {SN} = normalized {S} )
C	2.  Call FIND_BOUNDS to find upper and lower bounds on the minimum
C	3.  Call GOLDEN_SECTION to zero in on the minimum
C	4.  Calculate alpha for unnormalized vector S  (alphan*{SN} = alpha*{S})
C
	IMPLICIT NONE
	INTEGER NDV,MAXSTEP
	REAL ALPHA,X(NDV),S(NDV),STEP,ARES,DS,OBJ
	EXTERNAL EVAL
	REAL EVAL

	PARAMETER MAXNDV = 4
	REAL AUPPER,ALOWER,SN(MAXNDV),SMAG

	CALL NORMALIZE(SN,SMAG,S,NDV)			!normalize S vector
	IF (SMAG .EQ. 0) RETURN				!zero vector
	CALL FIND_BOUNDS(AUPPER,ALOWER,X,SN,NDV,STEP,MAXSTEP,EVAL)
	CALL GOLDEN_SECTION(ALPHA,X,OBJ,AUPPER,ALOWER,SN,NDV,ARES,EVAL)
	ALPHA = ALPHA / SMAG			!alpha for unnormalized S

	RETURN
	END
C------------------------------------------------------------------------------

	SUBROUTINE GRADIENT(GRADF,X,NDV,DS,EVAL)
C  Function:
C	Calculate the gradient vector GRADF of function EVAL at point X
C	using finite difference steps of size DS.
C
C Arguments:
C	GRADF - out.  Gradient vector of function EVAL at point X
C	X      - in.  Point at which to determine gradient.  Real array of
C	              dimension NDV.
C	NDV    - in.  Number of Design Variables.  Integer dimension of
C	              arrays X and S.
C	DS     - in.  Delta step size.  A tiny step size used to determine
C	              the slope at a given point.
C	EVAL   - in.  Function which evaluates the user's objective function
C	              and returns the functional result as a real scalar
C	              value.  This is declared as an EXTERNAL routine.
C	               The routine is called as follows:
C	                    RESULT = EVAL(X,NDV)
C	               where:
C	                    X - Array of design variables the objective
C	                        function is to be evaluated at.
C	                    NDV - Number of design variables.  The dimension
C	                          of array X.
C	                    RESULT - Real scalar.
C
C  Outline:
C	For each design variable i, estimate the slope with a backward
C	finite difference (e(i) is the unit vector along variable i):
C
C	    GRADF(i) = ( F(X) - F(X - ds*e(i)) ) / ds
C
	IMPLICIT NONE
	INTEGER NDV
	REAL GRADF(NDV),X(NDV),DS
	EXTERNAL EVAL
	REAL EVAL

	PARAMETER MAXNDV = 4
	REAL X1(MAXNDV)
	INTEGER I

	DO I=1,NDV
	  CALL COPYVEC(X1,X,NDV)
	  X1(I) = X1(I) - DS
	  GRADF(I) = ( EVAL(X,NDV) - EVAL(X1,NDV) ) / DS
	ENDDO
	RETURN
	END
C------------------------------------------------------------------------------
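GRADIENT's backward-difference estimate can be sketched in Python (here
F(X) is computed once outside the loop; the FORTRAN re-evaluates it for
each component):

```python
# Estimate the gradient of f at x by perturbing each design variable
# by -ds and forming (f(x) - f(x - ds*e_i)) / ds.
def gradient(f, x, ds=1e-6):
    f0 = f(x)
    grad = []
    for i in range(len(x)):
        x1 = list(x)                  # copy the point
        x1[i] -= ds                   # step variable i back by ds
        grad.append((f0 - f(x1)) / ds)
    return grad

g = gradient(lambda x: x[0]**2 + 3.0 * x[1], [2.0, 0.0])
print(g)     # close to [4.0, 3.0], the analytic gradient
```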

	SUBROUTINE CDM(X,NDV,STEP,MAXSTEP,ARES,DS,MAXIT,EVAL)
C  Function:
C	Find the minimum of the objective function EVAL.
C	Find X which minimises the following:
C
C		EVAL( {X} )
C
C	This routine uses the conjugate-direction method of Fletcher and Reeves
C	to minimize an unconstrained function of NDV variables.
C
C Arguments:
C	X      - modify.  Real array of dimension NDV.
C	              On input this is the initial set of design variables.
C	              On output this is the best set of design variables
C	              which minimizes the function EVAL.
C	NDV    - in.  Number of Design Variables.  Integer dimension of
C	              arrays X and S.
C	STEP   - in.  Used by FIND_BOUNDS.  Step size to use when searching
C	              for upper and lower bounds on the minimum.  Real Scalar
C	MAXSTEP- in.  Used by FIND_BOUNDS.  Maximum number of
C	              steps to take before aborting.  Integer.
C	              If bounds on the minimum are not found within MAXSTEP
C	              steps, then execution aborts with an error message.
C	              This variable is intended to prevent an infinite
C	              loop condition.
C	ARES   - in.  Alpha resolution.  Real scalar.  Used by golden section
C	              search.  When the change in alpha is less than this
C	              amount, convergence is declared.
C	DS     - in.  Delta step size.  A tiny step size used to determine
C	              the slope at a given point.
C	MAXIT  - in.  Maximum number of Conjugate Gradient Iterations to take
C	              before aborting.  Integer scalar.  If convergence is
C	              not obtained within MAXIT iterations, then execution
C	              terminates anyway.  This variable is intended to
C	              prevent an infinite loop condition.
C	EVAL   - in.  Function which evaluates the user's objective function
C	              and returns the functional result as a real scalar
C	              value.  This is declared as an EXTERNAL routine.
C	               The routine is called as follows:
C	                    RESULT = EVAL(X,NDV)
C	               where:
C	                    X - Array of design variables the objective
C	                        function is to be evaluated at.
C	                    NDV - Number of design variables.  The dimension
C	                          of array X.
C	                    RESULT - Real scalar.
C
C  Local variables:
C	S             Vector defining the direction to search in.
C	              Real array of dimension NDV.
C
C Outline:
C	1.  Start with X = X initial
C	3.  Calculate gradient at {X}  ( {GRAD F} = (dF(X)/dX) )
C	4.  Save a = {GRAD F} dot {GRAD F}
C	5.  Calculate search direction {S}   ( {S} = - {GRAD F} )
C	6.  Search for minimum in S direction
C	      Find alpha which minimizes F( {X} + alpha*{S} )
C	7.  Check for convergence criteria.
C       !!Exit if alpha < ARES (alpha resolution)
C	8.  Calculate new {X}:  {X} = {X} + alpha*{S}  (returned by ONEDSEARCH)
C	9.  Calculate gradient at new {X}  ( {GRAD F} = (dF(X)/dX) )
C	10. Save b = {GRAD F} dot {GRAD F}
C	11. Calculate BETA:  BETA = b/a
C	12. Calculate new search direction {S}
C	      {S}new = - {GRAD F} + beta*{S}old
C	13. Save a = b
C	14. Slope = {S} dot {GRAD F}
C	15. If Slope .GE. 0 goto 4
C	16. Else Search for minimum in S direction
C	      Find alpha which minimizes F( {X} + alpha*{S} )
C	17. Check for convergence
C	      Exit if converged.
C	      Loop to 8 if not converged.
C
	IMPLICIT NONE
	INTEGER NDV,MAXSTEP,MAXIT
	REAL X(NDV),STEP,ARES,DS
	EXTERNAL EVAL
	REAL EVAL

	INTEGER MAXNDV
	PARAMETER (MAXNDV = 4)
	REAL S(MAXNDV),GRADF(MAXNDV),TMP(MAXNDV),A,B,ALPHA,BETA,SLOPE,OBJ
	INTEGER ITERATION

C  Write initial objective function value at initial X
	OBJ = EVAL(X,NDV)
	WRITE(6,100) OBJ
	WRITE(6,101) X
100	FORMAT(' INITIAL OBJECTIVE FUNCTION: ',F12.6)
101	FORMAT(1X,4F12.6)

	CALL GRADIENT(GRADF,X,NDV,DS,EVAL)	!3.  Calculate gradient at {X}
	CALL DOT(A,GRADF,GRADF,NDV)		!4.  a = {GRAD F} dot {GRAD F}
	CALL SCALARMULT(S,GRADF,-1.0,NDV)	!5.  {S} = - {GRADF}

C LOOP
	DO ITERATION = 1,MAXIT

C  6.  Search for minimum in S direction
	  CALL ONEDSEARCH(ALPHA,X,OBJ,S,NDV,STEP,MAXSTEP,ARES,EVAL)
	  WRITE(6,102) ITERATION, OBJ		!Write out new values
	  WRITE(6,101) X
102	  FORMAT(' ITERATION NUMBER: ', I4, '   OBJECTIVE FUNCTION: ',F12.6)

	  CALL GRADIENT(GRADF,X,NDV,DS,EVAL)	!9.  gradient at new {X}
	  CALL DOT(B,GRADF,GRADF,NDV)		!10. b = {GRAD F} dot {GRAD F}
	  BETA = B/A				!11. BETA = b/a

C  12.    Calculate new search direction {S}:
C	  {S}new = - {GRAD F} + beta*{S}old
C	  Except every NDVth step we start over with steepest descent

	  IF ( MOD(ITERATION,NDV) .EQ. 0) THEN	!then start over with steepest descent
	    CALL GRADIENT(GRADF,X,NDV,DS,EVAL)	!3.  Calculate gradient at {X}
	    CALL DOT(A,GRADF,GRADF,NDV)		!4.  a = {GRAD F} dot {GRAD F}
	    CALL SCALARMULT(S,GRADF,-1.0,NDV)	!5.  {S} = - {GRADF}
	  ELSE
	    CALL SCALARMULT(TMP,S,BETA,NDV)	!{tmp} = beta*{S}
	    CALL SCALARMULT(S,GRADF,-1.0,NDV)	!{S} = - {GRAD F}
	    CALL ADDVEC(S,S,TMP,NDV)		!{S} = - {GRAD F} + beta*{S}
	    A = B				!13.  Save a = b
	    CALL DOT(SLOPE,S,GRADF,NDV)		!14.  Slope = {S} dot {GRAD F}
	    IF (SLOPE .GE. 0) THEN		!15.
	      CALL DOT(A,GRADF,GRADF,NDV)	!goto 4
	      CALL SCALARMULT(S,GRADF,-1.0,NDV)
	    ENDIF
	  ENDIF

	ENDDO					!goto 6

	RETURN
	END
C------------------------------------------------------------------------------

	SUBROUTINE POWELL(X,NDV,STEP,MAXSTEP,ARES,MAXIT,EVAL)
C  Function:
C	Find the minimum of the objective function EVAL.
C	Find X which minimizes the following:
C
C		EVAL( {X} )
C
C	This routine uses Powell's zero order method to minimize an
C	unconstrained function of NDV variables.  (Zero order means
C	it does not use gradients.  It does not check the slope of
C	the function at any point.  It does not use first-order
C	derivatives.)
C
C Arguments:
C	X      - modify.  Real array of dimension NDV.
C	              On input this is the initial set of design variables.
C	              On output this is the best set of design variables
C	              which minimizes the function EVAL.
C	NDV    - in.  Number of Design Variables.  Integer dimension of
C	              array X.
C	STEP   - in.  Used by FIND_BOUNDS.  Step size to use when searching
C	              for upper and lower bounds on the minimum.  Real Scalar
C	MAXSTEP- in.  Used by FIND_BOUNDS.  Integer scalar.  Maximum number
C	              of steps to take before aborting.
C	              If bounds on the minimum are not found within MAXSTEP
C	              steps, then execution aborts with an error message.
C	              This variable is intended to prevent an infinite
C	              loop condition.
C	ARES   - in.  Alpha resolution.  Real scalar.  Used by golden section
C	              search.  When the change in alpha is less than this
C	              amount, convergence is declared.
C	MAXIT  - in.  Maximum number of restart cycles to take
C	              before aborting.  Integer scalar.  If convergence is
C	              not obtained within MAXIT iterations, then execution
C	              terminates anyway.  This variable is intended to
C	              prevent an infinite loop condition.
C	EVAL   - in.  Function which evaluates the user's objective function
C	              and returns the functional result as a real scalar
C	              value.  This is declared as an EXTERNAL routine.
C	               The routine is called as follows:
C	                    RESULT = EVAL(X,NDV)
C	               where:
C	                    X - Array of design variables the objective
C	                        function is to be evaluated at.
C	                    NDV - Number of design variables.  The dimension
C	                          of array X.
C	                    RESULT - Real scalar.
C
C  Local variables:
C	S             Vector defining the direction to search in.
C	              Real array of dimension NDV.
C	H             Square matrix of search direction columns.  The name
C	              H is chosen by convention because of its approximation
C	              to the Hessian matrix.  Real array of dimension NDV x NDV.
C
C Outline:
C	1.  Start with {X} = X initial
C	2.  Initialize [H] to identity matrix
C	3.  Save {Y} = {X}
C	4.  do n=1,ndv
C	4a.   search direction {S}  = nth column of H
C	4b.   Search for minimum in {S} direction
C	        Find alpha which minimizes F( {X} + alpha*{S} )
C	        Set new {X} = {X} + alpha*{S}  (returned by ONEDSEARCH)
C	4c. enddo
C	5.  Conjugate search direction {S} = {X} - {Y}
C	6.  Search for minimum in {S} direction
C	        Find alpha which minimizes F( {X} + alpha*{S} )
C	        Set new {X} = {X} + alpha*{S}  (returned by ONEDSEARCH)
C	7.  Check for convergence criteria.
C         !!Exit if alpha < ARES (alpha resolution)
C	        Exit if RESTART = MAXIT
C	8.  Shift H matrix left
C	        discard 1st column of H, shift columns left one,
C	        set last column of H = alpha*{S}
C	9.  goto 4.
C
	IMPLICIT NONE
	INTEGER NDV,MAXSTEP,MAXIT
	REAL X(NDV),STEP,ARES
	EXTERNAL EVAL
	REAL EVAL

	INTEGER MAXNDV
	PARAMETER (MAXNDV = 4)
	REAL H(MAXNDV,MAXNDV),S(MAXNDV),Y(MAXNDV),OBJ,ALPHA
	INTEGER RESTART,ITERATION,I,J,N

C  Write initial objective function value at initial X
	OBJ = EVAL(X,NDV)
	WRITE(6,100) OBJ
	WRITE(6,101) X
100	FORMAT(' INITIAL OBJECTIVE FUNCTION (EVAL): ',F12.6)
101	FORMAT(1X,4F12.6)

	DO RESTART=1,MAXIT
	WRITE(6,119) RESTART
119	FORMAT(' SET [H] MATRIX TO IDENTITY.  RESTART #',I4)

C	2.  Initialize [H] to identity matrix
	DO J=1,NDV
	  DO I=1,NDV
	    IF (I.EQ.J) THEN
	      H(I,J) = 1.0
	    ELSE
	      H(I,J) = 0.0
	    ENDIF
	  ENDDO
	ENDDO

C	3.  Save {Y} = {X}
	CALL COPYVEC(Y,X,NDV)

	DO ITERATION=1,NDV
C	4.  do n=1,ndv
C	4a.   search direction {S}  = nth column of H
C	4b.   Search for minimum in {S} direction
C	        Find alpha which minimizes F( {X} + alpha*{S} )
C	        Set new {X} = {X} + alpha*{S}  (returned by ONEDSEARCH)
C	4c. enddo
	DO N=1,NDV
	  CALL COLVECH(S,N,H,NDV,MAXNDV)		!{S} = nth column of H
	  CALL ONEDSEARCH(ALPHA,X,OBJ,S,NDV,STEP,MAXSTEP,ARES,EVAL)
	  WRITE(6,102) ITERATION, N, OBJ		!Write out new values
	  WRITE(6,101) X
102	  FORMAT(' ITERATION NUMBER: ', I4,
	1	' ORTHOGONAL DIRECTION: ',I2, ' EVAL=',F12.6)
	ENDDO

C	5.  Conjugate search direction {S} = {X} - {Y}
	CALL SCALARMULT(S,Y,-1.0,NDV)		!{S} = -{Y}
	CALL ADDVEC(S,X,S,NDV)			!{S} = {X} + -{Y}

C	6.  Search for minimum in {S} direction
C	        Find alpha which minimizes F( {X} + alpha*{S} )
C	        Set new {X} = {X} + alpha*{S}  (returned by ONEDSEARCH)
	CALL ONEDSEARCH(ALPHA,X,OBJ,S,NDV,STEP,MAXSTEP,ARES,EVAL)
	WRITE(6,103) ITERATION, OBJ		!Write out new values
	WRITE(6,101) X
103	FORMAT(' ITERATION NUMBER: ', I4,
	1	' CONJUGATE DIRECTION SEARCH.  EVAL=',F12.6)

C	8.  Shift H matrix left
C	        discard 1st column of H, shift columns left one,
C	        set last column of H = alpha*{S}
	CALL MSHIFTL(H,NDV)
	DO J=1,NDV
	  H(J,NDV) = ALPHA*S(J)			!last column of H = alpha*{S}
	ENDDO

	ENDDO	!do iteration=1,ndv
	ENDDO	!do restart=1,maxit
	RETURN
	END


      INTEGER FUNCTION FCNPRIM( N )
C  Input number N, return the index of the smallest prime .GE. N
      INTEGER N
      INTEGER I
      INTEGER NUMPRIMES
      PARAMETER (NUMPRIMES = 168)
      INTEGER PRIME(NUMPRIMES)
      DATA PRIME /
     *   2,   3,   5,   7,  11,  13,  17,  19,  23,  29,  31,  37,  41,
     *  43,  47,  53,  59,  61,  67,  71,  73,  79,  83,  89,  97, 101,
     * 103, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157, 163, 167,
     * 173, 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, 233, 239,
     * 241, 251, 257, 263, 269, 271, 277, 281, 283, 293, 307, 311, 313,
     * 317, 331, 337, 347, 349, 353, 359, 367, 373, 379, 383, 389, 397,
     * 401, 409, 419, 421, 431, 433, 439, 443, 449, 457, 461, 463, 467,
     * 479, 487, 491, 499, 503, 509, 521, 523, 541, 547, 557, 563, 569,
     * 571, 577, 587, 593, 599, 601, 607, 613, 617, 619, 631, 641, 643,
     * 647, 653, 659, 661, 673, 677, 683, 691, 701, 709, 719, 727, 733,
     * 739, 743, 751, 757, 761, 769, 773, 787, 797, 809, 811, 821, 823,
     * 827, 829, 839, 853, 857, 859, 863, 877, 881, 883, 887, 907, 911,
     * 919, 929, 937, 941, 947, 953, 967, 971, 977, 983, 991, 997 /

      DO I=1,NUMPRIMES
        IF (PRIME(I) .GE. N) THEN
          FCNPRIM = I
          RETURN
        ENDIF
      ENDDO
      STOP 'FCNPRIM error'
      END

