
6.6.2 Techniques for Multiple Roots . . . . . . . . . . . 275
6.7 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . 276
6.7.1 Analysis of the State Equation for a Real Gas . . 276
6.7.2 Analysis of a Nonlinear Electrical Circuit . . . . . 277
6.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279

7. Nonlinear Systems and Numerical Optimization 281
7.1 Solution of Systems of Nonlinear Equations . . . . . . . . 282
7.1.1 Newton's Method and Its Variants . . . . . . . . . 283
7.1.2 Modified Newton's Methods . . . . . . . . . . . . . 284
7.1.3 Quasi-Newton Methods . . . . . . . . . . . . . . . 288
7.1.4 Secant-Like Methods . . . . . . . . . . . . . . . . . 288
7.1.5 Fixed-Point Methods . . . . . . . . . . . . . . . . . 290
7.2 Unconstrained Optimization . . . . . . . . . . . . . . . . . 294
7.2.1 Direct Search Methods . . . . . . . . . . . . . . . . 295
7.2.2 Descent Methods . . . . . . . . . . . . . . . . . . . 300
7.2.3 Line Search Techniques . . . . . . . . . . . . . . . 302
7.2.4 Descent Methods for Quadratic Functions . . . . . 304
7.2.5 Newton-Like Methods for Function Minimization . 307
7.2.6 Quasi-Newton Methods . . . . . . . . . . . . . . . 308

7.2.7 Secant-Like Methods . . . . . . . . . . . . . . . . . 309
7.3 Constrained Optimization . . . . . . . . . . . . . . . . . . 311
7.3.1 Kuhn-Tucker Necessary Conditions for
Nonlinear Programming . . . . . . . . . . . . . . . 313
7.3.2 The Penalty Method . . . . . . . . . . . . . . . . . 315
7.3.3 The Method of Lagrange Multipliers . . . . . . . . 317
7.4 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . 319
7.4.1 Solution of a Nonlinear System Arising from
Semiconductor Device Simulation . . . . . . . . . . 320
7.4.2 Nonlinear Regularization of a Discretization Grid . 323
7.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325

8. Polynomial Interpolation 327
8.1 Polynomial Interpolation . . . . . . . . . . . . . . . . . . . 328
8.1.1 The Interpolation Error . . . . . . . . . . . . . . . 329
8.1.2 Drawbacks of Polynomial Interpolation on Equally
Spaced Nodes and Runge's Counterexample . . . . 330
8.1.3 Stability of Polynomial Interpolation . . . . . . . . 332
8.2 Newton Form of the Interpolating Polynomial . . . . . . . 333
8.2.1 Some Properties of Newton Divided Differences . . 335
8.2.2 The Interpolation Error Using Divided Differences 337
8.3 Piecewise Lagrange Interpolation . . . . . . . . . . . . . . 338
8.4 Hermite-Birkhoff Interpolation . . . . . . . . . . . . . . 341
8.5 Extension to the Two-Dimensional Case . . . . . . . . . . 343
8.5.1 Polynomial Interpolation . . . . . . . . . . . . . . 343
8.5.2 Piecewise Polynomial Interpolation . . . . . . . . . 344
8.6 Approximation by Splines . . . . . . . . . . . . . . . . . . 348
8.6.1 Interpolatory Cubic Splines . . . . . . . . . . . . . 349
8.6.2 B-Splines . . . . . . . . . . . . . . . . . . . . . . . 353
8.7 Splines in Parametric Form . . . . . . . . . . . . . . . . . 357
8.7.1 Bézier Curves and Parametric B-Splines . . . . . . 359
8.8 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . 362
8.8.1 Finite Element Analysis of a Clamped Beam . . . 363
8.8.2 Geometric Reconstruction Based on
Computer Tomographies . . . . . . . . . . . . . . . 366
8.9 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368

9. Numerical Integration 371
9.1 Quadrature Formulae . . . . . . . . . . . . . . . . . . . . . 371
9.2 Interpolatory Quadratures . . . . . . . . . . . . . . . . . . 373
9.2.1 The Midpoint or Rectangle Formula . . . . . . . . 373
9.2.2 The Trapezoidal Formula . . . . . . . . . . . . . . 375
9.2.3 The Cavalieri-Simpson Formula . . . . . . . . . . . 377
9.3 Newton-Cotes Formulae . . . . . . . . . . . . . . . . . . . 378
9.4 Composite Newton-Cotes Formulae . . . . . . . . . . . . . 383

9.5 Hermite Quadrature Formulae . . . . . . . . . . . . . . . . 386
9.6 Richardson Extrapolation . . . . . . . . . . . . . . . . . . 387
9.6.1 Romberg Integration . . . . . . . . . . . . . . . . . 389
9.7 Automatic Integration . . . . . . . . . . . . . . . . . . . . 391
9.7.1 Non Adaptive Integration Algorithms . . . . . . . 392
9.7.2 Adaptive Integration Algorithms . . . . . . . . . . 394
9.8 Singular Integrals . . . . . . . . . . . . . . . . . . . . . . . 398
9.8.1 Integrals of Functions with Finite
Jump Discontinuities . . . . . . . . . . . . . . . . . 398
9.8.2 Integrals of Infinite Functions . . . . . . . . . . . . 398
9.8.3 Integrals over Unbounded Intervals . . . . . . . . . 401
9.9 Multidimensional Numerical Integration . . . . . . . . . . 402
9.9.1 The Method of Reduction Formula . . . . . . . . . 403
9.9.2 Two-Dimensional Composite Quadratures . . . . . 404
9.9.3 Monte Carlo Methods for
Numerical Integration . . . . . . . . . . . . . . . . 407
9.10 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . 408
9.10.1 Computation of an Ellipsoid Surface . . . . . . . . 408
9.10.2 Computation of the Wind Action on a
Sailboat Mast . . . . . . . . . . . . . . . . . . . . . 410
9.11 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412

PART IV: Transforms, Differentiation
and Problem Discretization

10. Orthogonal Polynomials in Approximation Theory 415
10.1 Approximation of Functions by Generalized Fourier Series 415
10.1.1 The Chebyshev Polynomials . . . . . . . . . . . . . 417
10.1.2 The Legendre Polynomials . . . . . . . . . . . . . 419
10.2 Gaussian Integration and Interpolation . . . . . . . . . . . 419
10.3 Chebyshev Integration and Interpolation . . . . . . . . . . 424
10.4 Legendre Integration and Interpolation . . . . . . . . . . . 426
10.5 Gaussian Integration over Unbounded Intervals . . . . . . 428
10.6 Programs for the Implementation of Gaussian Quadratures 429
10.7 Approximation of a Function in the Least-Squares Sense . 431
10.7.1 Discrete Least-Squares Approximation . . . . . . . 431
10.8 The Polynomial of Best Approximation . . . . . . . . . . . 433
10.9 Fourier Trigonometric Polynomials . . . . . . . . . . . . . 435
10.9.1 The Gibbs Phenomenon . . . . . . . . . . . . . . . 439
10.9.2 The Fast Fourier Transform . . . . . . . . . . . . . 440
10.10 Approximation of Function Derivatives . . . . . . . . . . . 442
10.10.1 Classical Finite Difference Methods . . . . . . . . 442
10.10.2 Compact Finite Differences . . . . . . . . . . . . . 444
10.10.3 Pseudo-Spectral Derivative . . . . . . . . . . . . . 448
10.11 Transforms and Their Applications . . . . . . . . . . . . . 450

10.11.1 The Fourier Transform . . . . . . . . . . . . . . . . 450
10.11.2 (Physical) Linear Systems and Fourier Transform . 453
10.11.3 The Laplace Transform . . . . . . . . . . . . . . . 455
10.11.4 The Z-Transform . . . . . . . . . . . . . . . . . . . 457
10.12 The Wavelet Transform . . . . . . . . . . . . . . . . . . . . 458
10.12.1 The Continuous Wavelet Transform . . . . . . . . 458
10.12.2 Discrete and Orthonormal Wavelets . . . . . . . . 461
10.13 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . 463
10.13.1 Numerical Computation of Blackbody Radiation . 463
10.13.2 Numerical Solution of Schrödinger Equation . . . 464
10.14 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467

11. Numerical Solution of Ordinary Differential Equations 469
11.1 The Cauchy Problem . . . . . . . . . . . . . . . . . . . . . 469
11.2 One-Step Numerical Methods . . . . . . . . . . . . . . . . 472
11.3 Analysis of One-Step Methods . . . . . . . . . . . . . . . . 473
11.3.1 The Zero-Stability . . . . . . . . . . . . . . . . . . 475
11.3.2 Convergence Analysis . . . . . . . . . . . . . . . . 477
11.3.3 The Absolute Stability . . . . . . . . . . . . . . . . 479
11.4 Difference Equations . . . . . . . . . . . . . . . . . . . . 482
11.5 Multistep Methods . . . . . . . . . . . . . . . . . . . . . . 487
11.5.1 Adams Methods . . . . . . . . . . . . . . . . . . . 490
11.5.2 BDF Methods . . . . . . . . . . . . . . . . . . . . 492
11.6 Analysis of Multistep Methods . . . . . . . . . . . . . . . . 492
11.6.1 Consistency . . . . . . . . . . . . . . . . . . . . . . 493
11.6.2 The Root Conditions . . . . . . . . . . . . . . . . . 494
11.6.3 Stability and Convergence Analysis for
Multistep Methods . . . . . . . . . . . . . . . . . . 495
11.6.4 Absolute Stability of Multistep Methods . . . . . . 499
11.7 Predictor-Corrector Methods . . . . . . . . . . . . . . . . . 502
11.8 Runge-Kutta Methods . . . . . . . . . . . . . . . . . . . . 508
11.8.1 Derivation of an Explicit RK Method . . . . . . . 511
11.8.2 Stepsize Adaptivity for RK Methods . . . . . . . . 512
11.8.3 Implicit RK Methods . . . . . . . . . . . . . . . . 514
11.8.4 Regions of Absolute Stability for RK Methods . . 516
11.9 Systems of ODEs . . . . . . . . . . . . . . . . . . . . . . . 517
11.10 Stiff Problems . . . . . . . . . . . . . . . . . . . . . . . . 519
11.11 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . 521
11.11.1 Analysis of the Motion of a Frictionless Pendulum 522
11.11.2 Compliance of Arterial Walls . . . . . . . . . . . . 523
11.12 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527

12. Two-Point Boundary Value Problems 531
12.1 A Model Problem . . . . . . . . . . . . . . . . . . . . . . . 531
12.2 Finite Difference Approximation . . . . . . . . . . . . . . 533

12.2.1 Stability Analysis by the Energy Method . . . . . 534
12.2.2 Convergence Analysis . . . . . . . . . . . . . . . . 538
12.2.3 Finite Differences for Two-Point Boundary
Value Problems with Variable Coefficients . . . . . 540
12.3 The Spectral Collocation Method . . . . . . . . . . . . . . 542
12.4 The Galerkin Method . . . . . . . . . . . . . . . . . . . . . 544
12.4.1 Integral Formulation of Boundary-Value Problems 544
12.4.2 A Quick Introduction to Distributions . . . . . . . 546
12.4.3 Formulation and Properties of the
Galerkin Method . . . . . . . . . . . . . . . . . . . 547
12.4.4 Analysis of the Galerkin Method . . . . . . . . . . 548
12.4.5 The Finite Element Method . . . . . . . . . . . . . 550
12.4.6 Implementation Issues . . . . . . . . . . . . . . . . 556
12.4.7 Spectral Methods . . . . . . . . . . . . . . . . . . . 559
12.5 Advection-Diffusion Equations . . . . . . . . . . . . . . . 560
12.5.1 Galerkin Finite Element Approximation . . . . . . 561
12.5.2 The Relationship Between Finite Elements and
Finite Differences; the Numerical Viscosity . . . . 563
12.5.3 Stabilized Finite Element Methods . . . . . . . . . 567
12.6 A Quick Glance to the Two-Dimensional Case . . . . . . . 572
12.7 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . 575
12.7.1 Lubrication of a Slider . . . . . . . . . . . . . . . . 575
12.7.2 Vertical Distribution of Spore
Concentration over Wide Regions . . . . . . . . . . 576
12.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578

13. Parabolic and Hyperbolic Initial Boundary
Value Problems 581
13.1 The Heat Equation . . . . . . . . . . . . . . . . . . . . . . 581
13.2 Finite Difference Approximation of the Heat Equation . . 584
13.3 Finite Element Approximation of the Heat Equation . . . 586
13.3.1 Stability Analysis of the θ-Method . . . . . . . . . 588
13.4 Space-Time Finite Element Methods for the
Heat Equation . . . . . . . . . . . . . . . . . . . . . . . . . 593
13.5 Hyperbolic Equations: A Scalar Transport Problem . . . . 597
13.6 Systems of Linear Hyperbolic Equations . . . . . . . . . . 599
13.6.1 The Wave Equation . . . . . . . . . . . . . . . . . 601
13.7 The Finite Difference Method for Hyperbolic Equations . . 602
13.7.1 Discretization of the Scalar Equation . . . . . . . . 602
13.8 Analysis of Finite Difference Methods . . . . . . . . . . . . 605
13.8.1 Consistency . . . . . . . . . . . . . . . . . . . . . . 605
13.8.2 Stability . . . . . . . . . . . . . . . . . . . . . . . . 605
13.8.3 The CFL Condition . . . . . . . . . . . . . . . . . 606
13.8.4 Von Neumann Stability Analysis . . . . . . . . . . 608
13.9 Dissipation and Dispersion . . . . . . . . . . . . . . . . . . 611

13.9.1 Equivalent Equations . . . . . . . . . . . . . . . . 614
13.10 Finite Element Approximation of Hyperbolic Equations . . 618
13.10.1 Space Discretization with Continuous and
Discontinuous Finite Elements . . . . . . . . . . . 618
13.10.2 Time Discretization . . . . . . . . . . . . . . . . . 620
13.11 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . 623
13.11.1 Heat Conduction in a Bar . . . . . . . . . . . . . . 623
13.11.2 A Hyperbolic Model for Blood Flow
Interaction with Arterial Walls . . . . . . . . . . . 623
13.12 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625

References 627

Index of MATLAB Programs 643

Index 647
1. Foundations of Matrix Analysis




In this chapter we recall the basic elements of linear algebra which will be
employed in the remainder of the text. For most of the proofs as well as
for the details, the reader is referred to [Bra75], [Nob69], [Hal58]. Further
results on eigenvalues can be found in [Hou75] and [Wil65].




1.1 Vector Spaces
Definition 1.1 A vector space over the numeric field K (K = R or K = C)
is a nonempty set V, whose elements are called vectors and in which two
operations are defined, called addition and scalar multiplication, that enjoy
the following properties:

1. addition is commutative and associative;

2. there exists an element 0 ∈ V (the zero vector or null vector) such
that v + 0 = v for each v ∈ V ;

3. 0 · v = 0, 1 · v = v, where 0 and 1 are respectively the zero and the
unity of K;

4. for each element v ∈ V there exists its opposite, −v, in V such that
v + (−v) = 0;

5. the following distributive properties hold

   ∀α ∈ K, ∀v, w ∈ V,  α(v + w) = αv + αw,

   ∀α, β ∈ K, ∀v ∈ V,  (α + β)v = αv + βv;

6. the following associative property holds

   ∀α, β ∈ K, ∀v ∈ V,  (αβ)v = α(βv).




Example 1.1 Remarkable instances of vector spaces are:
- V = Rn (respectively V = Cn): the set of the n-tuples of real (respectively
complex) numbers, n ≥ 1;
- V = Pn: the set of polynomials pn(x) = Σ_{k=0}^{n} ak x^k with real (or complex)
coefficients ak having degree less than or equal to n, n ≥ 0;
- V = C^p([a, b]): the set of real (or complex)-valued functions which are
continuous on [a, b] up to their p-th derivative, 0 ≤ p < ∞. •
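Since the supporting codes of this text are written in MATLAB (see the Index of MATLAB Programs), a minimal MATLAB sketch may help fix ideas: a polynomial of Pn can be identified with its vector of n+1 coefficients, and the vector space operations on Pn then reduce to ordinary vector addition and scaling. The polynomials below are purely illustrative choices, not taken from the text.

   % P2: identify p(x) = a2*x^2 + a1*x + a0 with the coefficient vector
   % [a2 a1 a0] (MATLAB's polyval expects descending powers).
   p = [1 -3 2];            % p(x) = x^2 - 3x + 2
   q = [0  1 5];            % q(x) = x + 5, padded to degree 2
   s = p + q;               % addition in P2: s(x) = x^2 - 2x + 7
   polyval(s, 2)            % evaluates s(2) = 4 - 4 + 7 = 7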

Definition 1.2 We say that a nonempty subset W of V is a vector subspace
of V iff W is itself a vector space over K.

Example 1.2 The vector space Pn is a vector subspace of C^∞(R), the space
of infinitely differentiable functions on the real line. A trivial subspace of any
vector space is the one containing only the zero vector. •

In particular, the set W of the linear combinations of a system of p vectors
of V, {v1, . . . , vp}, is a vector subspace of V, called the generated subspace
or span of the vector system, and is denoted by

   W = span{v1, . . . , vp}
     = {v = α1 v1 + . . . + αp vp with αi ∈ K, i = 1, . . . , p} .

The system {v1, . . . , vp} is called a system of generators for W.
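In MATLAB one can test whether a given vector belongs to the span of a system of vectors: w ∈ span{v1, . . . , vp} if and only if appending w to the matrix whose columns are the vi does not increase its rank. The vectors below are illustrative; for ill-conditioned data this rank test should be used with care (a tolerance can be passed to rank).

   % Does w belong to span{v1, v2}?
   v1 = [1; 0; 1];
   v2 = [0; 1; 1];
   w  = [2; 3; 5];                      % here w = 2*v1 + 3*v2
   V  = [v1 v2];
   in_span = (rank([V w]) == rank(V))   % returns logical 1 (true)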
If W1 , . . . , Wm are vector subspaces of V , then the set

S = {w : w = v1 + . . . + vm with vi ∈ Wi , i = 1, . . . , m}

is also a vector subspace of V . We say that S is the direct sum of the
subspaces Wi if any element s ∈ S admits a unique representation of the
form s = v1 + . . . + vm with vi ∈ Wi and i = 1, . . . , m. In such a case, we
shall write S = W1 ⊕ . . . ⊕ Wm.

Definition 1.3 A system of vectors {v1, . . . , vm} of a vector space V is
called linearly independent if the relation

   α1 v1 + α2 v2 + . . . + αm vm = 0

with α1, α2, . . . , αm ∈ K implies that α1 = α2 = . . . = αm = 0. Otherwise,
the system will be called linearly dependent.
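The same rank criterion gives a practical (if numerically delicate) test of linear independence: the vectors v1, . . . , vm are linearly independent if and only if the matrix having them as columns has rank m. A small illustrative MATLAB sketch, with hypothetical data:

   % Columns of V: v1, v2 and v3 = v1 + v2 (hence a dependent system).
   V = [1 0 1;
        0 1 1;
        1 1 2];
   independent = (rank(V) == size(V, 2))   % returns logical 0 (false)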

We call a basis of V any system of linearly independent generators of V .
If {u1 , . . . , un } is a basis of V , the expression v = v1 u1 + . . . + vn un is
called the decomposition of v with respect to the basis and the scalars
v1 , . . . , vn ∈ K are the components of v with respect to the given basis.
Moreover, the following property holds.

Property 1.1 Let V be a vector space which admits a basis of n vectors.
Then every system of linearly independent vectors of V has at most n el-
ements and any other basis of V has n elements. The number n is called
the dimension of V and we write dim(V ) = n.
If, instead, for any n there always exist n linearly independent vectors of
V, the vector space is called infinite dimensional.

Example 1.3 For any integer p the space C^p([a, b]) is infinite dimensional. The
spaces Rn and Cn have dimension equal to n. The usual basis for Rn is the set of
unit vectors {e1, . . . , en} where (ei)j = δij for i, j = 1, . . . , n, δij denoting the
Kronecker symbol, equal to 0 if i ≠ j and to 1 if i = j. This choice is of course
not the only one that is possible (see Exercise 2). •



1.2 Matrices
Let m and n be two positive integers. We call a matrix having m rows and
n columns, or a matrix m × n, or a matrix (m, n), with elements in K, a
set of mn scalars aij ∈ K, with i = 1, . . . , m and j = 1, . . . , n, represented
in the following rectangular array
       ⎡ a11  a12  · · ·  a1n ⎤
       ⎢ a21  a22  · · ·  a2n ⎥
   A = ⎢  ⋮    ⋮    ⋱      ⋮  ⎥                                  (1.1)
       ⎣ am1  am2  · · ·  amn ⎦

When K = R or K = C we shall respectively write A ∈ Rm×n or A ∈
Cm×n, to explicitly outline the numerical fields which the elements of A
belong to. Capital letters will be used to denote the matrices, while the
lower case letters corresponding to those upper case letters will denote the
matrix entries.

We shall abbreviate (1.1) as A = (aij) with i = 1, . . . , m and j = 1, . . . , n.
The index i is called row index, while j is the column index. The set
(ai1 , ai2 , . . . , ain ) is called the i-th row of A; likewise, (a1j , a2j , . . . , amj )
is the j-th column of A.
If n = m the matrix is called square, or of order n, and the set of
the entries (a11, a22, . . . , ann) is called its main diagonal.
A matrix having one row or one column is called a row vector or a column
vector, respectively. Unless otherwise specified, we shall always assume that
a vector is a column vector. In the case n = m = 1, the matrix will simply
denote a scalar of K.
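In MATLAB the rows, the columns and the main diagonal of a matrix are extracted with the colon operator and the diag function; the matrix below is only a toy example.

   A = [1 2 3;
        4 5 6;
        7 8 9];       % a square matrix of order 3
   row2 = A(2, :);    % the 2nd row:    [4 5 6]
   col3 = A(:, 3);    % the 3rd column: [3; 6; 9]
   d    = diag(A);    % the main diagonal: [1; 5; 9]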
Sometimes it turns out to be useful to distinguish within a matrix the set
made up by specified rows and columns. This prompts us to introduce the
following definition.

Definition 1.4 Let A be a matrix m × n. Let 1 ≤ i1 < i2 < . . . < ik ≤ m
and 1 ≤ j1 < j2 < . . . < jl ≤ n be two sets of contiguous indexes. The matrix
S(k × l) of entries spq = aip jq with p = 1, . . . , k, q = 1, . . . , l is called a
submatrix of A. If k = l and ir = jr for r = 1, . . . , k, S is called a principal
submatrix of A.

Definition 1.5 A matrix A(m × n) is called block partitioned or said to
be partitioned into submatrices if

       ⎡ A11  A12  · · ·  A1l ⎤
       ⎢ A21  A22  · · ·  A2l ⎥
   A = ⎢  ⋮    ⋮    ⋱      ⋮  ⎥ ,
       ⎣ Ak1  Ak2  · · ·  Akl ⎦
where Aij are submatrices of A.
Among the possible partitions of A, we recall in particular the partition by
columns

   A = (a1, a2, . . . , an),

ai being the i-th column vector of A. In a similar way the partition by rows
of A can be defined. To fix the notations, if A is a matrix m × n, we shall
denote by

   A(i1 : i2, j1 : j2) = (aij),  i1 ≤ i ≤ i2, j1 ≤ j ≤ j2,

the submatrix of A of size (i2 − i1 + 1) × (j2 − j1 + 1) that lies between the
rows i1 and i2 and the columns j1 and j2 of A.
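The notation A(i1 : i2, j1 : j2) coincides with MATLAB's colon indexing, and block partitions can be both extracted and reassembled with the same syntax; the following sketch uses an arbitrary 4 × 4 test matrix as an illustration.

   A = magic(4);                   % a 4 x 4 test matrix
   S = A(2:3, 1:2);                % submatrix between rows 2, 3 and columns 1, 2
   A11 = A(1:2, 1:2);  A12 = A(1:2, 3:4);
   A21 = A(3:4, 1:2);  A22 = A(3:4, 3:4);
   B = [A11 A12; A21 A22];         % reassembling the 2 x 2 block partition
   isequal(A, B)                   % returns logical 1: B recovers A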