of the algorithm consists of fitting routines for those parts of the starting grid that are located near the boundary and of simplicial subdivisions of the obtained geometric elements. The fitting procedures perform recursive subdivisions of the boundary rectangles or rectangular parallelepipeds in such a way that at the end every geometric element contains at most one geometry-defining point (i.e., a vertex of Ω or a point of ∂Ω where the type of boundary conditions changes). Finally, the so-called smoothing step, which optimizes the grid with respect to a certain regularity criterion, can be supplemented; see Section 4.1.4.

Typically, grids generated by overlay methods are close to structured

grids in the interior of the domain. Near the boundary, they lose this structure. Further details can be found in the references [68] and [72].

Delaunay Triangulations

The core algorithm of these methods generates, for a given cloud of isolated points (nodes), a triangulation of their convex hull. Therefore, a grid generator based on this principle has to include a procedure for the generation of this point set (for example, the points resulting from an overlay method) as well as certain fitting procedures (to cover, for example, nonconvex domains, too).

The Delaunay triangulation of the convex hull of a given point set in R^d is characterized by the following property (empty sphere criterion, Figure 4.1): Any open d-ball, the boundary of which contains d + 1 points from the given set, does not contain any other points from that set. The triangulation can be generated from the so-called Voronoi tessellation of R^d for the given point set. In two dimensions, this procedure is described in Chapter 6, which deals with finite volume methods (Section 6.2.1). However, practical algorithms ([48] or [71]) apply the empty sphere criterion more directly.

Figure 4.1. Empty sphere criterion in two dimensions (d = 2).
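The empty sphere criterion can be illustrated by a deliberately naive sketch: collect every triangle of a point set whose open circumcircle contains no other point of the set. This brute-force search (with cubic-by-linear cost) is only an illustration, not a practical algorithm; the point set and the tolerance are chosen for the example and assume points in general position.

```python
# Brute-force illustration of the empty sphere criterion for d = 2:
# a triangle belongs to the Delaunay triangulation exactly when its
# open circumcircle contains no other point of the set.
from itertools import combinations

def circumcircle(a, b, c):
    """Circumcentre and squared radius of triangle (a, b, c), or None if degenerate."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-14:
        return None
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy), (ux - ax)**2 + (uy - ay)**2

def delaunay_triangles(pts):
    """All triangles of pts whose open circumcircle is empty."""
    tris = []
    for i, j, k in combinations(range(len(pts)), 3):
        cc = circumcircle(pts[i], pts[j], pts[k])
        if cc is None:
            continue                      # collinear triple, no circumcircle
        (ux, uy), r2 = cc
        empty = all((px - ux)**2 + (py - uy)**2 >= r2 - 1e-12
                    for idx, (px, py) in enumerate(pts) if idx not in (i, j, k))
        if empty:
            tris.append((i, j, k))
    return tris

# unit square with one interior point: four Delaunay triangles result
pts = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (0.3, 0.4)]
print(delaunay_triangles(pts))
```

The interior point blocks every corner-only triangle (their circumcircles contain it), so exactly the four triangles fanning out from it survive the test.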

The interesting theoretical properties of Delaunay triangulations are one of the reasons for the "popularity" of this method. In two dimensions, the so-called max-min-angle property is valid: Among all triangulations of the convex hull G of a given point set, the Delaunay triangulation maximizes the minimal interior angle over all triangles. In the case d = 3, this nice property does not remain true; even badly shaped elements (the so-called sliver elements) may occur. A further important property of a two-dimensional Delaunay triangulation is that the sum of the two angles opposite an interior edge is not more than π. For example, such a requirement is part of the angle condition formulated in Section 3.9.

Advancing Front Methods

The idea of these methods, which are also known in the literature (see, e.g.,

[50], [56], [60], [62]) as moving front methods, is to generate a triangulation

recursively from a discretization of the current boundary. The methods

start with a partition of the boundary of G_0 := Ω. For d = 2, this "initial front" is a polygonal line, whereas in d = 3 it is a triangulation of a curved surface (the so-called "2.5-dimensional triangulation"). The method consists of an iteration of the following general step (Figure 4.2): An element of the current front (i.e., a straight-line segment or a triangle) is taken and then, either generating a new inner point or taking an already existing point, a new simplex K_j that belongs to G_{j-1} is defined. After the data of the new simplex are saved, the simplex is deleted from G_{j-1}. In this way, a smaller domain G_j with a new boundary ∂G_j (a new "current front") results. The general step is repeated until the current front is empty. Often, the grid generation process is supplemented by the so-called smoothing step; see Section 4.1.4.

Figure 4.2. Step j of the advancing front method: The new simplex K_j is deleted from the domain G_{j-1}.

180 4. Grid Generation and A Posteriori Error Estimation

4.1.3 Generation of Quadrilateral and Hexahedral Grids

Grids consisting of quadrilaterals or hexahedra can also be generated by means of overlay methods (e.g., [66]) or advancing front methods (e.g., [46], [47]). An interesting application of simplicial advancing front methods in the two-dimensional case is given in the paper [73]. The method is based on the simple fact that any two triangles sharing a common edge form a quadrilateral. Obviously, a necessary condition for the success of the method is that the triangulation consist of an even number of triangles. Unfortunately, the generalization of the method to the three-dimensional situation is difficult, because a comparatively large number of adjacent tetrahedra would have to be united to form a hexahedron.
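The underlying fact can be sketched by a small greedy pairing routine. This is only an illustration of the merging step, not the algorithm of [73]; the input triangulation (vertex indices of a square split by its centre point) and the function name are chosen for the example.

```python
# Greedy sketch: merge pairs of edge-adjacent triangles into quadrilaterals.
def pair_into_quads(triangles):
    """Return (quadrilaterals, indices of unpaired triangles)."""
    # map each (sorted) edge to the triangles containing it
    edges = {}
    for t, tri in enumerate(triangles):
        for i in range(3):
            e = tuple(sorted((tri[i], tri[(i + 1) % 3])))
            edges.setdefault(e, []).append(t)
    used, quads = set(), []
    for e, ts in edges.items():
        if len(ts) == 2 and not (set(ts) & used):
            t1, t2 = ts
            a, b = e                                  # the shared edge
            c = next(v for v in triangles[t1] if v not in e)
            d = next(v for v in triangles[t2] if v not in e)
            quads.append((c, a, d, b))                # quadrilateral c-a-d-b
            used.update(ts)
    leftover = [t for t in range(len(triangles)) if t not in used]
    return quads, leftover

# four triangles around the centre node 4 of a square 0-1-2-3
triangles = [(0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4)]
quads, leftover = pair_into_quads(triangles)
print(quads, leftover)
```

With an even number of triangles and a favourable adjacency structure, as here, the greedy pass pairs every triangle; in general, a leftover list signals that a more careful matching (as in [73]) is needed.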

Multiblock Methods

The basic idea of these methods is to partition the domain into a small number of subdomains ("blocks") of simple shape (quadrilaterals, hexahedra, as well as triangles, tetrahedra, prisms, pyramids, etc.) and then to generate structured or logically structured grids in the individual subdomains (see, e.g., [23], [33]).

In multiblock grids, special attention has to be devoted to the treatment of common boundaries of adjacent blocks. Unless special discretization methods such as, for example, the so-called mortar finite element method (cf. [45]) are used in this situation, there may be a conflict between certain compatibility conditions at the common block interfaces (to ensure, e.g., the continuity of the finite element functions across the interfaces) on the one hand and the output of an error estimation procedure that may advise refining a block-internal grid locally on the other hand.

Hierarchically Structured Grids

These grids are a further, hybrid variant of structured and unstructured grids, though not yet very widespread. Starting with a logically structured grid, hierarchically structured grids are generated by a further logically structured refinement of certain subdomains. As in multiblock methods, the interfaces between blocks of different refinement degrees have to be treated carefully.

Combined Grids

Especially in three-dimensional situations, the generation of "purely" hexahedral grids may be very difficult for complicated geometries of the domain. Therefore, so-called combined grids, which consist of hexahedral grids in geometrically simple subdomains and tetrahedral, prismatic, pyramidal, etc. grids in more critical subregions, are used.

Chimera Grids

These grids are also called overset grids (see, e.g., [51]). In contrast to the multiblock grids described above, here the domain is covered by a comparatively small number of domains of simple shape, and then structured or logically structured grids are generated on the individual domains. That is, a certain overlapping of the blocks, and thus of the subgrids, is admitted.

4.1.4 Grid Optimization

Many grid generation codes include "smoothing algorithms" that optimize the grid with respect to certain regularity criteria. In the so-called r-method (relocation method) the nodes are slightly moved, keeping the logical structure (connectivities) of the grid fixed. Another approach is to improve the grid connectivities themselves.

A typical example of an r-method is given by the so-called Laplacian smoothing (or barycentric smoothing), where any inner grid point is moved into the barycentre of its neighbours (see [50]). A local weighting of selected neighbours can also be used (weighted barycentric smoothing). From a formal point of view, the application of Laplacian smoothing corresponds to the solution of a system of linear algebraic equations that is obtained from the equations of the arithmetic (or weighted) average of the nodes. The matrix of this system is large but sparse. The structure of this matrix is very similar to the one that results from a finite volume discretization of the Poisson equation as described in Section 6.2 (see the corresponding special case of (6.9)). In general, there is no need to solve this system exactly. Typically, only one to three steps of a simple iterative solver (as presented in Section 5.1) are performed. When the domain is almost convex, Laplacian smoothing will produce good results. It is also clear that for strongly nonconvex domains or other special situations, the method may produce invalid grids.
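A minimal sketch of the iterative (Gauss-Seidel-like) variant follows: each inner node is moved into the barycentre of its neighbours for a few sweeps, with boundary nodes kept fixed. The grid and the number of sweeps are illustrative.

```python
# Laplacian (barycentric) smoothing sketch: inner nodes move to the
# barycentre of their neighbours; boundary nodes stay fixed.
def laplacian_smoothing(coords, edges, is_boundary, sweeps=3):
    """coords: list of [x, y]; edges: node-index pairs; is_boundary: flags."""
    # collect the neighbours of each node from the edge list
    nbrs = [[] for _ in coords]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    for _ in range(sweeps):              # one to three sweeps, as in the text
        for v, nb in enumerate(nbrs):
            if is_boundary[v] or not nb:
                continue
            coords[v] = [sum(coords[u][k] for u in nb) / len(nb)
                         for k in range(2)]
    return coords

# unit square with one badly placed inner node (index 4)
coords = [[0, 0], [1, 0], [1, 1], [0, 1], [0.9, 0.1]]
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (1, 4), (2, 4), (3, 4)]
is_boundary = [True, True, True, True, False]
print(laplacian_smoothing(coords, edges, is_boundary))
```

Updating nodes in place while sweeping corresponds to one step of a simple (Gauss-Seidel-type) iteration for the linear system mentioned above; solving that system exactly is unnecessary in practice.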

Among the methods to optimize the grid connectivities, the so-called 2:1-rule and, in the two-dimensional case, the edge swap (or diagonal swap, [59]) are well known. The 2:1-rule is used within the quadtree or octree method to reduce the difference of the refinement levels between neighbouring quadrilaterals or hexahedra to one by means of additional refinement steps; see Figure 4.3.
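The rule can be sketched for a quadtree over the unit square. Leaves are cells (level, i, j); whenever two edge-adjacent leaves differ by two or more refinement levels, the coarser one is refined further. The starting tree and the data layout are illustrative, and no attempt at efficiency is made.

```python
# 2:1-rule sketch on a quadtree over the unit square.
def refine(leaves, cell):
    l, i, j = cell
    leaves.remove(cell)
    leaves.update((l + 1, 2*i + di, 2*j + dj) for di in (0, 1) for dj in (0, 1))

def extent(cell):
    l, i, j = cell
    h = 1.0 / 2**l                    # powers of two: exact in floating point
    return i*h, (i + 1)*h, j*h, (j + 1)*h

def adjacent(c1, c2):
    """True if two cells share an edge segment of positive length."""
    x10, x11, y10, y11 = extent(c1)
    x20, x21, y20, y21 = extent(c2)
    touch_x = (x11 == x20 or x21 == x10) and min(y11, y21) > max(y10, y20)
    touch_y = (y11 == y20 or y21 == y10) and min(x11, x21) > max(x10, x20)
    return touch_x or touch_y

def balance_2to1(leaves):
    while True:
        viol = next(((c1, c2) for c1 in leaves for c2 in leaves
                     if c2[0] - c1[0] >= 2 and adjacent(c1, c2)), None)
        if viol is None:
            return leaves
        refine(leaves, viol[0])       # additional refinement of the coarser cell

# one corner refined twice; balancing must refine two of the big cells
leaves = {(1, 0, 0), (1, 1, 0), (1, 0, 1), (1, 1, 1)}
refine(leaves, (1, 0, 0))
refine(leaves, (2, 1, 1))
balance_2to1(leaves)
print(len(leaves))
```

Cells touching the twice-refined corner only at a single point are not affected, since the rule as sketched here compares cells across edges, not corners.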

In the edge swap method, a triangular grid is improved. Any two triangles sharing an edge form a quadrilateral; if this quadrilateral is convex, the method decides which of its two diagonals optimizes a given criterion.

If the optimal diagonal does not coincide with the common edge, the other configuration will be taken; i.e., the edge will be swapped.

Finally, it should be mentioned that there exist grid optimization

methods that delete nodes or even complete elements from the grid.

Figure 4.3. 2:1-rule.

4.1.5 Grid Refinement

A typical grid refinement algorithm for a triangular grid, the so-called red/green refinement, has previously been introduced in Section 2.4.1. A further class of methods is based on bisection; that is, a triangle is divided by the median of an edge. A method of bisection is characterized by the number of bisections used within one refinement step (the stage number of the method of bisection) and by the criterion of how to select the edge on which the new node is to be located. A popular strategy is to take the longest of the three edges. The general (recursive) refinement step for some triangle K is of the following form:

(i) Find the longest edge of K and insert the median connecting the midpoint of that edge with the opposite vertex.

(ii) If the resulting new node is not a vertex of an already existing triangle or a boundary point of the domain Ω, then the adjacent triangle that shares the refined edge has to be divided, too.

Since the longest edge of the adjacent triangle need not coincide with the common edge, the application of this scheme leads, in general, to a nonconforming triangulation. To obtain a conforming triangulation, all new nodes resulting from substep (i) have to be detected, and then certain closure rules have to be applied.
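Steps (i) and (ii) can be sketched as follows. The sketch handles only the simple case in which the longest edge of the adjacent triangle coincides with the common edge; as noted above, the general closure rules are more involved. Data structures (vertex-index tuples, a midpoint dictionary) are illustrative.

```python
# Longest-edge bisection sketch with a one-level closure step.
import math

def longest_edge(tri, coords):
    """Return the vertex pair spanning the longest edge of tri."""
    best, pair = -1.0, None
    for i in range(3):
        a, b = tri[i], tri[(i + 1) % 3]
        d = math.dist(coords[a], coords[b])
        if d > best:
            best, pair = d, (a, b)
    return pair

def bisect(tri_index, triangles, coords, midpoints):
    """Step (i): insert the median to the longest edge; step (ii): the
    neighbour sharing that edge is bisected too (recursively)."""
    tri = triangles[tri_index]
    a, b = longest_edge(tri, coords)
    key = tuple(sorted((a, b)))
    if key not in midpoints:
        coords.append([(coords[a][k] + coords[b][k]) / 2 for k in range(2)])
        midpoints[key] = len(coords) - 1
        # closure: the adjacent triangle sharing edge (a, b) must be divided
        for t, other in enumerate(triangles):
            if t != tri_index and a in other and b in other:
                bisect(t, triangles, coords, midpoints)
                break
    m = midpoints[key]
    c = next(v for v in tri if v not in (a, b))
    triangles[tri_index] = (a, m, c)          # replace tri by its two halves
    triangles.append((m, b, c))

# two triangles forming a 2x1 rectangle; their longest edge is the diagonal
coords = [[0.0, 0.0], [2.0, 0.0], [0.0, 1.0], [2.0, 1.0]]
triangles = [(0, 1, 2), (1, 3, 2)]
bisect(0, triangles, coords, midpoints={})
print(triangles)
```

Here the shared diagonal is the longest edge of both triangles, so one bisection call produces four conforming triangles around the new midpoint.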

The red/green refinement as well as the method of bisection can be generalized to the three-dimensional case. However, since the number of different configurations is significantly larger than in the case d = 2, only a few illustrative examples will be given.

The red/green refinement of a tetrahedron K (see Figure 4.4) yields a partition of K into eight subtetrahedra with the following properties: All vertices of the subtetrahedra coincide either with vertices or with edge midpoints of K. At all faces of K, the two-dimensional red/green refinement scheme can be observed.

In addition to the difficulties arising in the two-dimensional situation, the (one-stage) bisection applied to the longest edge of a tetrahedron also yields faces that violate the conformity conditions. Therefore, the closure rules are rather complicated, and in practice, multistage (often three-stage) methods of bisection are used to circumvent these difficulties (see Figure 4.5).

Figure 4.4. Representation of the red/green refinement of a tetrahedron.

Figure 4.5. Representation of the bisection of a tetrahedron.

Grid refinement may be necessary in those parts of the domain where the weak solution of the variational equation has low regularity. The figure on the front cover (taken from [70]) shows the domain for a density-driven flow problem, where the inflow and the outflow pass through very small, nearly point-sized surfaces. The refinement is the result of a grid adaptation strategy based on a posteriori error estimators (see Section 4.2). In time-dependent problems, where the parts of the grid in which refinement is needed may also vary, grid coarsening is necessary to limit the expense. A simple grid coarsening can be achieved, for example, by cancelling former refinement steps in a conforming way.

Exercises

4.1 For a given triangle K, the circumcentre can be computed by finding the intersection of the perpendicular bisectors associated with two edges of K. This can be achieved by solving a linear system of equations with respect to the coordinates of the circumcentre.

(a) Give such a system.

(b) How can the radius of the circumcircle be obtained from this solution?

4.2 Given a triangle K, denote by h_i the length of edge i, i ∈ {1, 2, 3}. Prove that the following expression equals the radius of the circumcircle (without using the circumcentre!):

    h_1 h_2 h_3 / (4|K|).
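The following sketch is a numerical check, not a proof: it sets up one possible 2x2 system for the circumcentre (equating squared distances to the vertices, as suggested in Exercise 4.1) and compares the resulting radius with the formula of Exercise 4.2 for one illustrative triangle.

```python
# Numerical check of Exercises 4.1 and 4.2 for a sample triangle.
import math

a, b, c = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)

# Equating |x-a|^2 = |x-b|^2 and |x-a|^2 = |x-c|^2 gives the linear system
#   2(b-a).x = |b|^2 - |a|^2,   2(c-a).x = |c|^2 - |a|^2.
A = [[2 * (b[0] - a[0]), 2 * (b[1] - a[1])],
     [2 * (c[0] - a[0]), 2 * (c[1] - a[1])]]
rhs = [b[0]**2 + b[1]**2 - a[0]**2 - a[1]**2,
       c[0]**2 + c[1]**2 - a[0]**2 - a[1]**2]

# solve by Cramer's rule
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
x = (rhs[0] * A[1][1] - rhs[1] * A[0][1]) / det
y = (A[0][0] * rhs[1] - A[1][0] * rhs[0]) / det
r = math.dist((x, y), a)                      # radius, Exercise 4.1(b)

# radius via h1*h2*h3 / (4|K|), Exercise 4.2
h1, h2, h3 = math.dist(a, b), math.dist(b, c), math.dist(c, a)
area = 0.5 * abs((b[0]-a[0]) * (c[1]-a[1]) - (c[0]-a[0]) * (b[1]-a[1]))
print(abs(r - h1 * h2 * h3 / (4 * area)) < 1e-12)
```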

4.3 Let K_1, K_2 be two triangles sharing an edge.

(a) Show the equivalence of the following edge swap criteria:

Angle criterion: Select the diagonal of the so-formed quadrilateral that maximizes the minimum of the six interior angles among the two configurations.

Circle criterion: Choose the diagonal of the quadrilateral for which the open circumcircle disks of the resulting triangles do not contain any of the remaining vertices.

(b) If α_1, α_2 denote the two interior angles that are located opposite the common edge of the triangles K_1 and K_2, respectively, then the circle criterion states that an edge swap is to be performed if

    α_1 + α_2 > π.

Prove this assertion.

(c) The criterion in (b) is numerically expensive. Show that the following test is equivalent:

    [(a_{1,1} − a_{3,1})(a_{2,1} − a_{3,1}) + (a_{1,2} − a_{3,2})(a_{2,2} − a_{3,2})]
    × [(a_{2,1} − a_{4,1})(a_{1,2} − a_{4,2}) − (a_{1,1} − a_{4,1})(a_{2,2} − a_{4,2})]
    < [(a_{2,1} − a_{4,1})(a_{1,1} − a_{4,1}) + (a_{2,2} − a_{4,2})(a_{1,2} − a_{4,2})]
    × [(a_{2,1} − a_{3,1})(a_{1,2} − a_{3,2}) − (a_{1,1} − a_{3,1})(a_{2,2} − a_{3,2})].

Here a_i = (a_{i,1}, a_{i,2})^T, i ∈ {1, 2, 3}, denote the vertices of a triangle ordered clockwise, and a_4 = (a_{4,1}, a_{4,2})^T is the remaining vertex of the quadrilateral, the position of which is tested in relation to the circumcircle defined by a_1, a_2, a_3.

Hint: Addition theorems for the sin function.
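The content of part (b) can be checked numerically (this is an illustration, not a proof): for two triangles sharing the edge a_1-a_2, the angles opposite that edge sum to more than π exactly when the fourth vertex lies inside the circumcircle of a_1, a_2, a_3. The sample configurations below are illustrative.

```python
# Numerical illustration of Exercise 4.3(b): angle sum vs. circumcircle test.
import math

def opposite_angle_sum(a1, a2, a3, a4):
    """Sum of the interior angles at a3 and a4, opposite the edge a1-a2."""
    def angle(p, q, r):            # interior angle at p in the triangle (p, q, r)
        v = (q[0] - p[0], q[1] - p[1])
        w = (r[0] - p[0], r[1] - p[1])
        return math.acos((v[0]*w[0] + v[1]*w[1])
                         / (math.hypot(*v) * math.hypot(*w)))
    return angle(a3, a1, a2) + angle(a4, a1, a2)

def inside_circumcircle(a1, a2, a3, a4):
    """True if a4 lies strictly inside the circumcircle of a1, a2, a3."""
    (ax, ay), (bx, by), (cx, cy) = a1, a2, a3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    r2 = (ux - ax)**2 + (uy - ay)**2
    return (a4[0] - ux)**2 + (a4[1] - uy)**2 < r2

a1, a2, a3 = (0.0, 0.0), (0.0, 2.0), (2.0, 1.0)
for a4 in [(-0.4, 1.0), (-5.0, 1.0)]:        # one inside, one outside
    swap = opposite_angle_sum(a1, a2, a3, a4) > math.pi
    print(swap, inside_circumcircle(a1, a2, a3, a4))
```

In both configurations the two decisions coincide, as the circle criterion predicts; the determinant test in (c) packages the same decision without trigonometric evaluations, where the sign conventions depend on the vertex ordering.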

4.2. A Posteriori Error Estimates and Grid Adaptation 185

4.2 A Posteriori Error Estimates and Grid Adaptation

In the practical application of discretization methods to partial differential equations, an important question is how much the computed approximative solution u_h deviates from the weak solution u of the given problem. Typically, a certain norm of the error u − u_h is taken as a measure of this deviation. For elliptic or parabolic differential equations of order two, a common norm to quantify the error is the energy norm (respectively, an equivalent norm) or the L^2-norm. Some practically important problems involve the approximation of so-called derived quantities, which can be mathematically interpreted as values of certain linear functionals of the solution u. In such a case, an estimate of the corresponding error is also of interest.

Example 4.1

J(u) = ∫_{Γ_0} ν · ∇u dσ : flux of u through a part of the boundary Γ_0 ⊂ ∂Ω,

J(u) = ∫_{Ω_0} u dx : integral mean of u on some subdomain Ω_0 ⊂ Ω.

In the following we will consider some estimates for a norm ‖·‖ of the error u − u_h and explain the corresponding terminology. Similar statements remain true if ‖u − u_h‖ is replaced by |J(u) − J(u_h)|.

The error estimates given in the previous chapters are characterized by the fact that no information about the computed solution u_h is needed. Estimates of this type are called a priori error estimates.

For example, consider a variational equation with a bilinear form that satisfies (for some space V such that H_0^1(Ω) ⊂ V ⊂ H^1(Ω) and ‖·‖ := ‖·‖_1) the assumptions (2.42), (2.43), and use piecewise linear, continuous finite elements for the numerical treatment. Then Céa's lemma (Theorem 2.17) together with the interpolation error estimate from Theorem 3.29 implies the estimate

    ‖u − u_h‖_1 ≤ (M/α) ‖u − I_h(u)‖_1 ≤ (M/α) C h,   (4.1)

where the constant C depends on the weak solution u of the variational equality. Here C has the special form

    C = C̄ ( ∫_Ω Σ_{|α|=2} |∂^α u|^2 dx )^{1/2}   (4.2)

with C̄ > 0 independent of u. Unfortunately, the structure of the bound (4.2) does not allow an immediate numerical application of (4.1).

But even if the constant C could be estimated and (4.1) could be used to determine the discretization parameter h (the maximum diameter of the triangles in T_h) for a prescribed tolerance, in general this would lead to a grid that is too fine. This corresponds to an algebraic problem that is too large. The reason is that the described approach determines a global parameter, whereas the true error measure may have different magnitudes in different regions of the domain Ω.

So we should aim at error estimates of the type

    ‖u − u_h‖ ≤ D η   (4.3)

or

    D_1 η ≤ ‖u − u_h‖ ≤ D_2 η,   (4.4)

where the constants D, D_1, D_2 > 0 do not depend on the discretization parameters and

    η = ( Σ_{K∈T_h} η_K^2 )^{1/2}.   (4.5)

Here the quantities η_K should be computable using only the data, possibly including u_h|_K, that are known on the particular element K.

If the bounds η (or the terms η_K, respectively) in (4.3) (respectively (4.4)) depend on u_h, i.e., they can be evaluated only if u_h is known, then they are called (local) a posteriori error estimators in the wider sense. Often the bounds also depend on the weak solution u of the variational equality, so in fact, they cannot be evaluated immediately. In such a case they should be replaced by computable quantities that do not depend on u in a direct way. So, if the bounds can be evaluated without knowing u but using possibly u_h, then they are called (local) a posteriori error estimators in the strict sense.
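How the local estimators combine into the global quantity (4.5) and feed a tolerance check can be sketched schematically; the per-element values, the tolerance, and the simple maximum-based marking rule below are illustrative, not a prescription from the text.

```python
# Schematic use of local estimators eta_K: global estimator per (4.5),
# then a reliability-style tolerance check with a simple marking rule.
import math

eta_K = {"K1": 0.02, "K2": 0.15, "K3": 0.04, "K4": 0.01}   # per element
eta = math.sqrt(sum(v * v for v in eta_K.values()))        # eq. (4.5)

eps = 0.1                                                  # tolerance
if eta <= eps:
    print("accept current solution")
else:
    # mark elements carrying the largest share of the estimated error
    marked = [K for K, v in eta_K.items() if v >= 0.5 * max(eta_K.values())]
    print("refine", marked)
```

With a reliable estimator in the sense of (4.3), η ≤ ε guarantees that the true error does not exceed ε up to the multiplicative constant D; otherwise the marked elements are candidates for local refinement.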

Inequalities of the form (4.3) guarantee, for a given tolerance ε > 0, that the inequality η ≤ ε implies that the error measure does not exceed ε up to a multiplicative constant. In this sense the error estimator η is called reliable. Now, if the computed approximative solution u_h is sufficiently precise in the described sense, then the computation can be finished. If u_h is such that η > ε, then the question arises of how to modify the discretization in order to achieve the tolerance or, if the computer resources are nearly exhausted, how to minimize the overshooting of η. That is, the