

[219] ———, A new algorithm for unconstrained optimization, in Nonlinear Programming, J. B. Rosen, O. L. Mangasarian, and K. Ritter, eds., Academic Press, New York, 1970, pp. 31–65.

[220] ———, Convergence properties of a class of minimization algorithms, in Nonlinear Programming 2, O. L. Mangasarian, R. R. Meyer, and S. M. Robinson, eds., Academic Press, New York, 1975, pp. 1–27.

[221] ———, Some global convergence properties of a variable metric algorithm without exact line searches, in Nonlinear Programming, R. Cottle and C. Lemke, eds., American Mathematical Society, Providence, RI, 1976, pp. 53–72.

[222] ———, Nonconvex minimization calculations and the conjugate gradient method, Lecture Notes in Mathematics 1066, Springer-Verlag, Berlin, 1984, pp. 122–141.

[223] ———, On the global convergence of trust region algorithms for unconstrained minimization, Math. Programming, 29 (1984), pp. 297–303.

[224] ———, Convergence properties of algorithms for nonlinear optimization, SIAM Rev., 28 (1986), pp. 487–500.

[225] ———, How bad are the BFGS and DFP methods when the objective function is quadratic?, Math. Programming, 34 (1986), pp. 34–47.

[226] ———, Update conjugate directions by the BFGS formula, Math. Programming, 38 (1987), pp. 29–46.

[227] ———, Direct search algorithms for optimization calculations, Acta Numerica, 7 (1998), pp. 287–336.

[228] K. Radhakrishnan and A. C. Hindmarsh, Description and Use of LSODE, the Livermore Solver for Ordinary Differential Equations, Tech. Rep. UCRL-ID-113855, Lawrence Livermore National Laboratory, Livermore, CA, December 1993.


Buy this book from SIAM at http://www.ec-securehost.com/SIAM/FR18.html.
Copyright ©1999 by the Society for Industrial and Applied Mathematics. This electronic version is for personal use and may not be duplicated or distributed.




174 BIBLIOGRAPHY

[229] W. Rudin, Principles of Mathematical Analysis, McGraw-Hill, New York, 1953.
[230] J. Sacks, W. J. Welch, T. J. Mitchell, and H. P. Wynn, Design and analysis of computer experiments, Statist. Sci., 4 (1989), pp. 409–435.
[231] R. B. Schnabel and E. Eskow, A new modified Cholesky factorization, SIAM J. Sci. Statist. Comput., 11 (1990), pp. 1136–1158.
[232] G. A. Schultz, R. B. Schnabel, and R. H. Byrd, A family of trust-region-based algorithms for unconstrained minimization with strong global convergence properties, SIAM J. Numer. Anal., 22 (1985), pp. 47–67.
[233] V. E. Shamanskii, A modification of Newton's method, Ukrain. Mat. Zh., 19 (1967), pp. 133–138 (in Russian).
[234] L. F. Shampine, Implementation of implicit formulas for the solution of ODEs, SIAM J. Sci. Statist. Comput., 1 (1980), pp. 103–118.
[235] ———, Numerical Solution of Ordinary Differential Equations, Chapman and Hall, New York, 1994.
[236] L. F. Shampine and M. W. Reichelt, The MATLAB ODE suite, SIAM J. Sci. Comput., 18 (1997), pp. 1–22.
[237] D. F. Shanno, Conditioning of quasi-Newton methods for function minimization, Math. Comp., 24 (1970), pp. 647–657.
[238] ———, On the variable metric methods for sparse Hessians, Math. Comp., 34 (1980), pp. 499–514.
[239] J. Sherman and W. J. Morrison, Adjustment of an inverse matrix corresponding to changes in the elements of a given column or a given row of the original matrix (abstract), Ann. Math. Statist., 20 (1949), p. 621.
[240] ———, Adjustment of an inverse matrix corresponding to a change in one element of a given matrix, Ann. Math. Statist., 21 (1950), pp. 124–127.
[241] B. O. Shubert, A sequential method seeking the global maximum of a function, SIAM J. Numer. Anal., 9 (1972), pp. 379–388.
[242] D. C. Sorensen, Newton's method with a model trust region modification, SIAM J. Numer. Anal., 19 (1982), pp. 409–426.
[243] ———, Minimization of a large-scale quadratic function subject to a spherical constraint, SIAM J. Optim., 7 (1997), pp. 141–161.
[244] W. Spendley, G. R. Hext, and F. R. Himsworth, Sequential application of simplex designs in optimisation and evolutionary operation, Technometrics, 4 (1962), pp. 441–461.
[245] W. Squire and G. Trapp, Using complex variables to estimate derivatives of real functions, SIAM Rev., 40 (1998), pp. 110–112.
[246] M. Srinivas and L. M. Patnaik, Genetic algorithms: A survey, Computer, 27 (1994), pp. 17–27.
[247] T. Steihaug, The conjugate gradient method and trust regions in large scale optimization, SIAM J. Numer. Anal., 20 (1983), pp. 626–637.







[248] C. P. Stephens and W. Baritompa, Global optimization requires global information, J. Optim. Theory Appl., 96 (1998), pp. 575–588.

[249] G. W. Stewart, Introduction to Matrix Computations, Academic Press, New York, 1973.

[250] D. Stoneking, G. Bilbro, R. Trew, P. Gilmore, and C. T. Kelley, Yield optimization using a GaAs process simulator coupled to a physical device model, IEEE Trans. Microwave Theory and Techniques, 40 (1992), pp. 1353–1363.

[251] D. E. Stoneking, G. L. Bilbro, R. J. Trew, P. Gilmore, and C. T. Kelley, Yield optimization using a GaAs process simulator coupled to a physical device model, in Proc. IEEE/Cornell Conference on Advanced Concepts in High Speed Devices and Circuits, IEEE, Piscataway, NJ, 1991, pp. 374–383.

[252] K. Tanabe, Continuous Newton-Raphson method for solving an underdetermined system of nonlinear equations, J. Nonlinear Anal., Theory Methods Appl., 3 (1979), pp. 495–503.

[253] ———, Global analysis of continuous analogs of the Levenberg-Marquardt and Newton-Raphson methods for solving nonlinear equations, Ann. Inst. Statist. Math., B, 37 (1985), pp. 189–203.

[254] T. Tian and J. C. Dunn, On the gradient projection method for optimal control problems with nonnegative L2 inputs, SIAM J. Control Optim., 32 (1994), pp. 516–537.

[255] P. L. Toint, On sparse and symmetric matrix updating subject to a linear equation, Math. Comp., 31 (1977), pp. 954–961.

[256] ———, On the superlinear convergence of an algorithm for solving a sparse minimization problem, SIAM J. Numer. Anal., 16 (1979), pp. 1036–1045.

[257] ———, Towards an efficient sparsity exploiting Newton method for minimization, in Sparse Matrices and Their Uses, I. S. Duff, ed., Academic Press, New York, 1981, pp. 57–88.

[258] ———, On large scale nonlinear least squares calculations, SIAM J. Sci. Statist. Comput., 8 (1987), pp. 416–435.

[259] ———, Global convergence of a class of trust-region methods for nonconvex minimization in Hilbert space, IMA J. Numer. Anal., 8 (1988), pp. 231–252.

[260] V. Torczon, private communication, 1997.

[261] ———, Multidirectional Search, Ph.D. thesis, Rice University, Houston, TX, 1989.

[262] ———, On the convergence of the multidirectional search algorithm, SIAM J. Optim., 1 (1991), pp. 123–145.

[263] ———, On the convergence of pattern search algorithms, SIAM J. Optim., 7 (1997), pp. 1–25.

[264] L. N. Trefethen and D. Bau, Numerical Linear Algebra, SIAM, Philadelphia, 1996.

[265] P. van Laarhoven and E. Aarts, Simulated Annealing, Theory and Practice, Kluwer,
Dordrecht, the Netherlands, 1987.

[266] L. N. Vicente, Trust-Region Interior-Point Algorithms for a Class of Nonlinear Program-
ming Problems, Ph.D. thesis, Rice University, Houston, TX, 1996.







[267] F. H. Walters, L. R. Parker, S. L. Morgan, and S. N. Demming, Sequential Simplex
Optimization, CRC Press, Boca Raton, FL, 1991.
[268] B. Watson, Quasi-Newton Methoden für Minimierungsprobleme mit strukturierter Hesse-Matrix, Diploma Thesis, Universität Trier, 1990.
[269] J. Werner, Über die globale Konvergenz von Variable-Metric Verfahren mit nichtexakter Schrittweitenbestimmung, Numer. Math., 31 (1978), pp. 321–334.

[270] T. A. Winslow, R. J. Trew, P. Gilmore, and C. T. Kelley, Doping profiles for optimum class B performance of GaAs MESFET amplifiers, in Proc. IEEE/Cornell Conference on Advanced Concepts in High Speed Devices and Circuits, IEEE, Piscataway, NJ, 1991, pp. 188–197.

[271] ———, Simulated performance optimization of GaAs MESFET amplifiers, in Proc. IEEE/Cornell Conference on Advanced Concepts in High Speed Devices and Circuits, IEEE, Piscataway, NJ, 1991, pp. 393–402.
[272] P. Wolfe, Convergence conditions for ascent methods, SIAM Rev., 11 (1969), pp. 226–235.

[273] ———, Convergence conditions for ascent methods II: Some corrections, SIAM Rev., 13 (1971), pp. 185–188.
[274] M. H. Wright, Direct search methods: Once scorned, now respectable, in Numerical Analysis 1995, Proc. 1995 Dundee Biennial Conference in Numerical Analysis, D. F. Griffiths and G. A. Watson, eds., Addison-Wesley Longman, Harlow, U.K., 1996, pp. 191–208.
[275] S. J. Wright, Compact storage of Broyden-class quasi-Newton matrices, preprint, Argonne National Laboratory, Argonne, IL, 1994.

[276] S. J. Wright and J. N. Holt, An inexact Levenberg-Marquardt method for large sparse nonlinear least squares, J. Austral. Math. Soc. Ser. B, 26 (1985), pp. 387–403.

[277] Z. Wu, The effective energy transformation scheme as a special continuation approach to global optimization with application to molecular conformation, SIAM J. Optim., 6 (1996), pp. 748–768.
[278] T. J. Ypma, The effect of rounding errors on Newton-like methods, IMA J. Numer. Anal., 3 (1983), pp. 109–118.
[279] S. K. Zavriev, On the global optimization properties of finite-difference local descent algorithms, J. Global Optim., 3 (1993), pp. 67–78.
[280] C. Zhu, R. H. Byrd, P. Lu, and J. Nocedal, L-BFGS-B: FORTRAN subroutines for large-scale bound constrained optimization, ACM Trans. Math. Software, 23 (1997), pp. 550–560.








Index

ε-active set, 97
ε-inactive set, 97
Actual reduction, 40
  trust region, 51
Adjoint equation, 10
Adjoint variable, 10
Armijo rule, 39, 78
Backtracking, 39
BFGS
  algorithm, 80
  global convergence, 78
  implementation, 78
  limited memory, 79
  local convergence, 72
  modified update, 80
  restarted, 79
  update, 71
Bounded deterioration, 74
Breakdown
  conjugate gradient, 7
Cauchy decrease, 62
Cauchy point, 55
CG-Trust region optimizer, 65
CGTRUST
  algorithm, 65
Cholesky factorization, 7, 16
Chord method, 19
  algorithm, 19
Classical dogleg
  algorithm, 63
Conjugate gradient, 7, 28
  algorithm, 7
Constrained optimization problem, 3
Constraints, 87
  active, 87
  inactive, 87
Control variable, 10
Convergence
  r-type, 14
Convergence theorem
  chord method, 19
  Newton's method, 15
Convex quadratic, 6, 45
Coordinate descent algorithm, 146
Critical point, 5
DACE, 149
Default choice, 82
Dennis–Moré condition, 76
Descent direction, 40
DFP update, 81
Difference approximation
  Hessian, 20
  Hessian–vector product, 20
DIRECT
  method, 135
Direct search algorithms, 135
Direction of negative curvature, 9, 30
Dogleg, 58
  classical, 58
Feasible
  point, 87
  set, 87
Fletcher–Reeves formula, 49
Forcing term, 28
Forward difference CG
  algorithm, 30
Forward difference PCG
  algorithm, 31
Frobenius norm, 77
Fundamental theorem of calculus, 4
Gauss–Newton
  damped, 47
Gauss–Newton method, 23
Genetic algorithms, 112
Global convergence
  Armijo rule, 43
  BFGS–Armijo, 78
  dogleg trust region, 52
  gradient projection method, 95
  projected BFGS method, 104





  scaled projected gradient, 99
Global minimization problem, 3
Global minimizer, 3
Globally convergent algorithm, 39
Gradient of f, 4
Gradient Projection
  algorithm, 91
Gradient projection, 91
Hessian of f, 4
Hooke–Jeeves
  algorithm, 135, 145
  convergence, 148
  exploratory move, 145
  pattern move, 146
Implicit filtering
  restart, 127
  scales, 124
Indefinite matrix, 4
Indices
  active, 87
  inactive, 87
Inexact Newton
  method, 28
  step, 28
Initial
  guess, 4
  iterate, 4
Inner iteration, 28
Interval methods, 112
Krylov subspace, 8
Large residual problem, 22
Lennard–Jones function, 120
Levenberg–Marquardt
  method, 47
  parameter, 47, 56
Line search, 39
  actual reduction, 40
  exact, 41, 86
  predicted reduction, 40
Line search algorithm, 40
Linear model, 14, 40
Lipschitz
  constant, 14
  continuity, 14
Local convergence
  BFGS, 72
  chord method, 19
  implicit filtering, 124
  inexact Newton method, 29, 30
  Newton's method, 16
  Projected BFGS method, 104
  Projected Newton method, 101
  q-type, 13
Local improvement, 18
Local linear model
  for equations, 14
Local minimizer, 3
  nondegenerate, 90
Local quadratic model
  exact Hessians, 15
MDS, 135, 143
  convergence, 145
Measure of stationarity, 93
Minimizer
  global, 3
  local, 3
Minimum, 3
Minimum at all scales, 127
Minimum norm solution, 23, 25
Model Hessian, 40
Moore–Penrose inverse, 26
Multidirectional search, 135, 143
  convergence, 145
Necessary condition
  first-order, 5
Necessary conditions
  bound constraints
    one variable, 88
  first-order
    bound constraints, 88
  for optimality, 5
Negative curvature, 9, 30, 36
Nelder–Mead
  oriented restart, 141
  simplex method, 135
  stagnation
    detection, 141
Newtcg
  algorithm, 32
Newton step, 15
Newton's method, 14
  algorithm, 16
Newton-iterative methods, 28
Nondegenerate local minimizer, 90
Nonlinear conjugate gradient
  algorithm, 48
  convergence, 49







  Fletcher–Reeves, 49
  Polak–Ribière, 49
Nonlinear least squares, 11, 22
  Jacobian, 22
  overdetermined, 22
  residual, 22
  underdetermined, 22
Ntrust
  algorithm, 63
Objective function, 3
Optimality
  necessary conditions, 5
  sufficient conditions, 6
Optimization problem
  constrained, 3
  easiest, 12
  unconstrained, 3
Oriented lengths, 112
Outer iteration, 28
Parameter identification, 11
Polak–Ribière formula, 49
Secant method
  classical, 71
Sensitivity equations, 11, 12
Shamanskii method, 20
Sherman–Morrison
  formula, 81
Shubert algorithm, 149
Simple decrease, 40
Simplex
  condition, 112
  convex hull, 112
  definition, 112
  diameter, 112
  directions, 112
  gradient
    forward, 113
  nonsingular, 112
  reflected, 115
  vertex, 112
Simplex gradient
  central, 115
Simulated annealing, 112
Singular value decomposition, 25
