
Figure 8.17: Parameter ID, tol = 10^{-6}, x_0 = (5, 5)^T. (Function value, log scale, versus function evaluations.)











Figure 8.18: Parameter ID, tol = 10^{-3}, x_0 = (5.1, 5.3)^T. (Function value, log scale, versus function evaluations.)




Figure 8.19: Parameter ID, tol = 10^{-6}, x_0 = (5.1, 5.3)^T. (Function value, log scale, versus function evaluations.)











Figure 8.20: Unperturbed Quadratic, N = 4. (Function value, log scale, versus function evaluations.)




Figure 8.21: Unperturbed Quadratic, N = 32. (Function value, log scale, versus function evaluations.)











Figure 8.22: Perturbed Quadratic, N = 4. (Function value, log scale, versus function evaluations.)




Figure 8.23: Perturbed Quadratic, N = 32. (Function value, log scale, versus function evaluations.)









8.6 Exercises on Search Algorithms
8.6.1. Let S^l for 1 ≤ l ≤ 3 be the simplices having one vertex at (x^l)_1 = (10, 10, 10, 10) and
direction vectors V^l given by

V^1 = diag(1, 2, 3, 4), V^2 = diag(4, 3, 2, 1), V^3 = diag(2, 2, 2, 2).

For each l = 1, 2, 3, apply the Nelder–Mead algorithm to the function f defined for x ∈ R^4
by

f(x) = (x_1 - x_2 x_3 x_4)^2 + (x_2 - x_3 x_4)^2 + (x_3 - x_4)^2 + x_4^2

with the initial simplex V^l. What happened? This example is one of Nelder's favorites
[203].
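
A minimal sketch for experimenting with this exercise, using SciPy's Nelder–Mead implementation rather than the codes accompanying the text, and assuming the remaining vertices of each initial simplex are (x^l)_1 plus the columns of V^l:

    import numpy as np
    from scipy.optimize import minimize

    def f(x):
        # objective from Exercise 8.6.1
        return ((x[0] - x[1]*x[2]*x[3])**2 + (x[1] - x[2]*x[3])**2
                + (x[2] - x[3])**2 + x[3]**2)

    x1 = np.full(4, 10.0)                          # common vertex (x^l)_1
    directions = [np.diag([1.0, 2.0, 3.0, 4.0]),   # V^1
                  np.diag([4.0, 3.0, 2.0, 1.0]),   # V^2
                  np.diag([2.0, 2.0, 2.0, 2.0])]   # V^3

    for l, V in enumerate(directions, start=1):
        simplex = np.vstack([x1, x1 + V.T])        # the 5 vertices of the initial simplex
        res = minimize(f, x1, method='Nelder-Mead',
                       options={'initial_simplex': simplex, 'maxfev': 5000})
        print(l, res.nfev, res.fun, res.x)
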
8.6.2. Show that if the set {x | f(x) ≤ f(x_1^0)} is bounded and S^0 is either an equilateral or a
right simplex, then (8.9) holds.
8.6.3. One can modify MDS [260] by eliminating the expansion step and only computing reflected
points until one is found that is better than x_1. If no reflected points are better,
then perform a contraction step. Prove that Theorem 8.2.1 holds for this implementation.
Implement MDS in this way and compare it with Algorithm mds. Are the savings in calls
to f for each iterate realized in a savings for the entire optimization?
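
As a starting point, here is a minimal sketch of multidirectional search with the expansion step removed (one reading of the exercise; the lazier, one-at-a-time evaluation of the reflected points, and its effect on the total number of f-calls, is left for you to add). The contraction factor of 1/2 and the stopping test are assumptions for illustration, not Algorithm mds from the text:

    import numpy as np

    def mds_no_expansion(f, simplex, budget=5000, tol=1e-8):
        # simplex: (N+1, N) array of vertices; returns best vertex, value, f-calls
        S = np.array(simplex, dtype=float)
        fS = np.array([f(v) for v in S])
        nfev = len(fS)
        while nfev < budget:
            order = np.argsort(fS)
            S, fS = S[order], fS[order]             # S[0] is the best vertex x_1
            R = 2.0 * S[0] - S[1:]                  # reflect every other vertex through x_1
            fR = np.array([f(v) for v in R]); nfev += len(fR)
            if fR.min() < fS[0]:
                S[1:], fS[1:] = R, fR               # accept the reflection; no expansion step
            else:
                S[1:] = 0.5 * (S[0] + S[1:])        # contract toward x_1
                fS[1:] = [f(v) for v in S[1:]]; nfev += len(S) - 1
            if np.max(np.abs(S[1:] - S[0])) < tol:  # simplex has collapsed; stop
                break
        return S[0], fS[0], nfev
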
8.6.4. The easiest problem in optimization is to minimize x^T x. Give the algorithms in this
section a chance to show what they can do by using them to solve this problem. Try several
initial iterates (or initial simplices/patterns) and several problem dimensions (especially
N = 8, 16, 32).
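
A throwaway driver along these lines (again using SciPy's Nelder–Mead as a stand-in for the algorithms of this section) makes the growth of the cost with dimension easy to see:

    import numpy as np
    from scipy.optimize import minimize

    def sumsq(x):
        # the easiest problem: f(x) = x^T x
        return float(np.dot(x, x))

    for N in (8, 16, 32):
        x0 = np.full(N, 1.0)                  # one of several initial iterates to try
        res = minimize(sumsq, x0, method='Nelder-Mead',
                       options={'xatol': 1e-8, 'fatol': 1e-8, 'maxfev': 20000})
        print(N, res.nfev, res.fun)
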

8.6.5. The search methods in this section impose a structure on the sampling and thereby hope
to find a useful optimal point far more efficiently than using an unstructured deterministic
or random search. Implement an unstructured search and use your algorithm to minimize
x^T x when N = 2. For an example of such a method, take the one from [6], please.
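
An unstructured search can be as naive as the sketch below, which samples a box uniformly at random and keeps the best point seen; the box and budget are arbitrary choices for illustration, and [6] describes its own scheme:

    import numpy as np

    def random_search(f, lo, hi, budget=600, seed=0):
        # uniform random sampling in the box [lo, hi]; no structure at all
        rng = np.random.default_rng(seed)
        best_x, best_f = None, np.inf
        for _ in range(budget):
            x = rng.uniform(lo, hi)
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
        return best_x, best_f

    x, fx = random_search(lambda x: float(x @ x),
                          lo=np.array([-5.0, -5.0]), hi=np.array([5.0, 5.0]))
    print(x, fx)
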

8.6.6. The Spendley, Hext, and Himsworth algorithm [244] manages the simplices in a very
different way from those we've discussed in the text. Use the information in [244] and
[267] to implement this algorithm. Use Theorem 6.2.9 to prove convergence for N = 2.
What happens to both your implementation and analysis when you try N = 3 or arbitrary
N? Explain Table 5 in [244].
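
The basic move of that algorithm, reflecting the worst vertex through the centroid of the remaining vertices so that the simplex keeps its shape and size, can be sketched as follows; the anti-cycling and shrinking rules described in [244] and [267] are omitted, so this is only a starting point for your implementation:

    import numpy as np

    def shh_step(S, fS, f):
        # one fixed-shape reflection: replace the worst vertex by its mirror
        # image through the centroid of the other N vertices
        worst = int(np.argmax(fS))
        centroid = np.delete(S, worst, axis=0).mean(axis=0)
        reflected = 2.0 * centroid - S[worst]
        S, fS = S.copy(), fS.copy()
        S[worst], fS[worst] = reflected, f(reflected)
        return S, fS
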

8.6.7. Use any means necessary to solve the Lennard–Jones problem. Have your results improved
since you tried exercises 6.4.3 and 7.7.4?
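
If you want a self-contained objective to feed into whatever method you choose, a pairwise Lennard–Jones cluster energy in a commonly used reduced form is sketched below; the normalization v(r) = r^{-12} - 2 r^{-6} (pair minimum at r = 1) is an assumption here, so substitute whichever form you adopted in exercises 6.4.3 and 7.7.4:

    import numpy as np

    def lennard_jones(x, natoms):
        # x holds the 3*natoms coordinates of the cluster, flattened
        pts = np.asarray(x, dtype=float).reshape(natoms, 3)
        energy = 0.0
        for i in range(natoms):
            for j in range(i + 1, natoms):
                r2 = float(np.sum((pts[i] - pts[j]) ** 2))
                energy += r2 ** -6 - 2.0 * r2 ** -3   # r^-12 - 2 r^-6
        return energy
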








Bibliography

[1] E. Aarts and J. Korst, Simulated Annealing and Boltzmann Machines, Wiley, New
York, 1989.
[2] L. Adams and J. L. Nazareth, eds., Linear and Nonlinear Conjugate Gradient-Related
Methods, SIAM, Philadelphia, 1996.
[3] M. Al-Baali, Descent property and global convergence of the Fletcher-Reeves method
with inexact line searches, IMA J. Numer. Anal., 5 (1985), pp. 121–124.
