
[Figure omitted: forward difference convergence plots of function value and simplex gradient norm against function evaluations.]

Figure 7.3: Third Weber Example

differencing. This, of course, is consistent with the theory, which does not claim that implicit
filtering is a global optimizer.

7.6.2 Parameter ID
We consider the parameter ID example from §1.6.2 using the data from §2.6.1. Recall that in this
example we use as data the values of the exact solution for c = k = 1 at the points t_i = i/100
for 1 ≤ i ≤ 100. The initial iterate was (5, 5)^T; the sequence of scales was {2^{-k}}_{k=1}^{12}. Implicit
filtering, like the globally convergent algorithms in the first part of the book, is fairly insensitive
to the choice of initial iterate, as we will see when we revisit this example in §8.5.2.
We report on both low (rtol = atol = 10^{-3}, Figure 7.4) and high (rtol = atol = 10^{-6},
Figure 7.5) accuracy computations. Note that after 200 function evaluations the function
reduction from the central difference BFGS form of implicit filtering flattens out in both plots at
roughly the expected level of O(tol) while the other methods have not. This effect, which is
not uncommon, is one reason for our preference for the BFGS central difference form of the
algorithm.
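The experimental setup described above is easy to reproduce in outline. The sketch below is schematic: it builds only the sample points, scale sequence, and initial iterate stated in the text, and does not restate the exact solution from §2.6.1 used to generate the data values themselves.

```python
# Sample points t_i = i/100 for 1 <= i <= 100, at which the exact
# solution (c = k = 1) is evaluated to produce the data.
t = [i / 100.0 for i in range(1, 101)]

# Sequence of scales {2^{-k}}, k = 1, ..., 12, used by implicit filtering;
# each scale sets the difference increment for one inner iteration.
scales = [2.0 ** (-k) for k in range(1, 13)]

# Initial iterate (5, 5)^T for the two parameters (c, k).
x0 = (5.0, 5.0)
```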

7.6.3 Convex Quadratics
The performance of the central difference BFGS form of implicit filtering should be very good,
since (see exercises 7.7.1 and 7.7.2) the difference approximation of the gradient is exact. We
would expect that good performance to persist in the perturbed case. We illustrate this with results
on two problems, both given by (6.12). One is an unperturbed problem (a_j = b_j = c_j = 0
for all j) where H is a diagonal matrix with (H)_ii = 1/(2i) for 1 ≤ i ≤ N. The other is a
perturbed problem with

ξ_0 = (sin(1), sin(2), . . . , sin(N))^T,  ξ_1 = 0,  ξ_2 = (1, . . . , 1)^T,

a_1 = a_2 = .01, a_3 = 0, b_1 = (1, . . . , 1)^T, b_2 = 0, and c_1 = c_2 = 10π.
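The exactness claim for quadratics can be checked directly: for f(x) = x^T H x / 2 the third derivatives vanish, so the central difference truncation error is zero for any stepsize. The sketch below (hypothetical helper names; it uses only the unperturbed diagonal quadratic described above, not the full form of (6.12)) verifies this up to roundoff.

```python
import numpy as np

def make_unperturbed_quadratic(N):
    """Diagonal quadratic f(x) = x^T H x / 2 with (H)_ii = 1/(2i)."""
    H = np.diag([1.0 / (2.0 * i) for i in range(1, N + 1)])
    f = lambda x: 0.5 * x @ H @ x
    return f, H

def central_diff_gradient(f, x, h):
    """Componentwise central difference gradient with stepsize h."""
    n = len(x)
    g = np.zeros(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

N = 4
f, H = make_unperturbed_quadratic(N)
x = np.array([1.0, -2.0, 3.0, -4.0])
g_exact = H @ x                              # true gradient of the quadratic
g_cd = central_diff_gradient(f, x, 0.5)      # exact even for a large h
```

A forward difference at the same stepsize would carry an O(h) error here, which is one way to see why the central difference form of implicit filtering behaves so well on these problems.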


Buy this book from SIAM at http://www.ec-securehost.com/SIAM/FR18.html.
Copyright ©1999 by the Society for Industrial and Applied Mathematics. This electronic version is for personal use and may not be duplicated or distributed.




130 ITERATIVE METHODS FOR OPTIMIZATION



[Figure omitted: central and forward difference convergence plots of function value and simplex gradient norm against function evaluations.]

Figure 7.4: Parameter ID, tol = 10^{-3}




[Figure omitted: central and forward difference convergence plots of function value and simplex gradient norm against function evaluations.]

Figure 7.5: Parameter ID, tol = 10^{-6}








IMPLICIT FILTERING 131



[Figure omitted: central and forward difference convergence plots of function value and simplex gradient norm against function evaluations.]

Figure 7.6: Unperturbed Quadratic, N = 4




[Figure omitted: central and forward difference convergence plots of function value and simplex gradient norm against function evaluations.]

Figure 7.7: Unperturbed Quadratic, N = 32











[Figure omitted: central and forward difference convergence plots of function value and simplex gradient norm against function evaluations.]

Figure 7.8: Perturbed Quadratic, N = 4



