
Numerical methods. Principles, analysis and algorithms. (English) Zbl 1196.65001

Oxford: Oxford University Press (ISBN 978-0-19-569375-1/pbk). xii, 812 p. (2009).
The well-written book is designed for engineering students of all branches and is based on undergraduate courses given by the author over the last two decades.
The 812-page monograph consists of 23 chapters which are divided into sections. The text is complemented by a number of figures, tables, flow charts, step-by-step procedures for the described methods, and program listings. The reader will find a very large number of exercises and solved examples in all chapters. The book closes with a relatively short bibliography, consisting of monograph titles only, an index of the most important notions, and answers to some end-of-chapter exercises. Historical remarks are integrated in the text.
Besides basic concepts, such as number systems, errors and their propagation, and short introductions to the characteristics of the most important programming languages and of some programming tools, the book contains numerical methods for algebraic and transcendental equations, interpolation and extrapolation, for linear algebra applications (direct and iterative algorithms for linear systems of equations, least squares, eigenvalue and eigenvector problems), for non-linear systems of equations, for numerical differentiation and integration, for initial and boundary value problems of ordinary differential equations and for parabolic, hyperbolic and elliptic differential equations. Parallel computing, the use of neural networks for the treatment of numerical problems, the so-called neurocomputing, and the solution of difference equations are the subject of the last three chapters.
No previous knowledge of the subject is required. Starting with basic material for beginners, more and more advanced topics are treated.
In order to highlight the subject of numerical methods, the author distinguishes already in the Preface between the “numbers of the mathematician” and the numbers of finite-precision arithmetic which a computer uses when calculating results according to implemented “constructive methods”. The author uses the notion “constructive method” for that part of mathematics which provides results by actual computation with infinite-precision arithmetic, in contrast to theorems that state only the existence of a solution without providing any method for finding it. Taking finite-precision arithmetic into account in “constructive methods” gives numerical methods. The author explicitly advises against an uncritical use of the “constructive methods”. (Following the author's line of thought, one could receive the impression that numerical methods are not a part of mathematics. The notion of numerical mathematics is not used.)
In line with these introductory remarks, the first chapter is devoted to the fundamental concept of number systems. The difference between the “numbers of the mathematician” and their approximation by numbers in finite-precision arithmetic is extensively described. The binary, decimal, octal and hexadecimal number systems, their conversions and their representation in a computer are considered. The handling of overflow and underflow, occurring when a number exceeds the capacity of a computer's storage format, is treated. The signed addition, subtraction, multiplication and division rules of finite-precision arithmetic in fixed-point and floating-point representation are described in detail, including comparative tables of the storage of floating-point numbers in single, double, and extended precision for actual computer number systems as defined by IEEE, IBM, and VAX.
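The finite precision discussed in this chapter can be illustrated with a small Python sketch (the reviewer's illustration, not taken from the book, which uses BASIC, FORTRAN, etc.):

```python
# Machine epsilon: the gap between 1.0 and the next larger representable
# number; it characterizes the finite precision of the arithmetic.
def machine_epsilon():
    eps = 1.0
    while 1.0 + eps / 2.0 != 1.0:
        eps /= 2.0
    return eps

print(machine_epsilon())     # 2**-52 for IEEE double precision
print(0.1 + 0.2 == 0.3)      # False: 0.1 has no finite binary expansion
```

The second line shows the gap between the “numbers of the mathematician” and machine numbers: the decimal 0.1 is not exactly representable in binary.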
In the short Chapter 2, the author points out that the subject of numerical methods is not simply the application of “constructive algorithms” but also the consideration of computer arithmetic. This is demonstrated, e.g., by examples such as the approximation of a convergent infinite series by a finite sum in a computer, by mathematically equivalent formulae giving exactly the same results in infinite precision but different results in finite precision, and so on. Some practical guidelines to avoid wrong results are summarized. The meaning of absolute and relative errors is described.
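An example of the kind discussed here, two mathematically equivalent formulae with different finite-precision results, in a Python sketch (the reviewer's illustration):

```python
import math

x = 1.0e12
# Two mathematically identical expressions for sqrt(x+1) - sqrt(x):
naive  = math.sqrt(x + 1.0) - math.sqrt(x)            # subtraction of close numbers
stable = 1.0 / (math.sqrt(x + 1.0) + math.sqrt(x))    # algebraically rearranged

print(naive, stable)   # the results differ; only the second is accurate
```

The first form loses most of its significant digits to cancellation; the rearranged form is accurate to full machine precision.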
The inexact results described in the first two chapters, caused by finite-precision arithmetic, by the consideration of only a finite number of terms of an infinite series, and so on, propagate if these numbers are used in further steps. Thus, this inexactness has to be measured; that is the subject of Chapter 3. A number of error types which influence the precision of the results are described, e.g., modelling, measurement, truncation, rounding and residual errors. The propagation of errors in finite-precision representation is demonstrated. The method of extended precision, forward and backward error analysis, error bounds and conventional rounding are considered.
Chapter 4 is devoted to programming languages and tools. The concept of an algorithm is introduced. Starting with a flow chart to describe a computational scheme, the author introduces the characteristics of the best-known programming languages such as BASIC, PASCAL, FORTRAN, C, C++ and COBOL, of the programming tool MATLAB, of the interactive problem-solving system MATHEMATICA for numerical and symbolic mathematical computation, of spreadsheets such as Lotus 1-2-3 or Excel, and of FoxPro/dBase programming. For simple examples, the reader will find flow charts and the corresponding program texts or assignments for the above-mentioned languages and tools, respectively.
Chapter 5 is focused on the solution of algebraic and transcendental equations, starting with the method of bisection, the regula falsi and its variants, and Ridders' method. A geometrical interpretation is included for these and most of the following algorithms of this chapter. General iterative methods are described. Further, the reader will find a linear iterative method, including a sufficient condition with a convergence proof, an error estimate, and round-off error treatment, and the methods of Aitken, Newton-Raphson, Kizner and Brent.
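A minimal Python sketch of the bisection method (the reviewer's illustration; the book presents such procedures as flow charts and program texts):

```python
import math

def bisect(f, a, b, tol=1e-12, max_iter=200):
    """Bisection: repeatedly halve [a, b] while keeping a sign change of f."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f must change sign on [a, b]")
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0.0 or 0.5 * (b - a) < tol:
            return m
        if fa * fm < 0:
            b = m               # root lies in the left half
        else:
            a, fa = m, fm       # root lies in the right half
    return 0.5 * (a + b)

# Root of cos(x) - x on [0, 1], approximately 0.739085
root = bisect(lambda x: math.cos(x) - x, 0.0, 1.0)
```

Convergence is linear (one binary digit per step), which is what motivates the faster methods treated later in the chapter.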
The subject of the next sections is finding roots of polynomial equations. The problem of ill-conditioned polynomials is illustrated by examples. The evaluation of polynomials, the Horner scheme, deflation and iterative methods, such as the secant, the Newton-Raphson, and Lehmer's method, are described. The Bairstow-Hitchcock method for complex roots, Laguerre's and Müller's algorithms, Bernoulli's procedure, Graeffe's root-squaring method, and the quotient-difference algorithm are treated, the last including stability analysis and storage requirements.
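The Horner scheme and deflation, central to these sections, can be sketched in Python as follows (the reviewer's illustration):

```python
def horner(coeffs, x):
    """Evaluate p(x), coefficients given in descending powers (Horner scheme)."""
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

def deflate(coeffs, r):
    """Synthetic division of p by (x - r); returns the quotient coefficients."""
    out = [coeffs[0]]
    for c in coeffs[1:-1]:
        out.append(c + r * out[-1])
    return out

p = [1.0, -6.0, 11.0, -6.0]    # (x-1)(x-2)(x-3)
print(horner(p, 2.0))           # 0.0, so x = 2 is a root
print(deflate(p, 2.0))          # [1.0, -4.0, 3.0], i.e. x^2 - 4x + 3
```

Once a root is found iteratively, deflation reduces the degree so the remaining roots can be attacked with the same methods.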
The numerical solution of linear systems of equations by direct methods is considered in Chapter 6. The theorems about whether homogeneous equations have only the trivial solution or an infinite number of solutions, and whether inhomogeneous equations have an infinite number of solutions, no solution, or a unique one, are formulated in introductory sections. Starting with triangular systems and backward substitution, Gaussian elimination for general matrices including partial pivoting, error and sensitivity analysis, condition number estimation, iterative refinement, the factorization modifications of Crout and Doolittle, and the Wilkinson algorithm with row equilibration are developed. Comparisons to the Gauss-Jordan method are given. The Cholesky factorization for positive definite symmetric matrices, which halves the computational work, and the solution of complex systems of linear equations of order \(n\) via a corresponding real system of order \(2n\) are briefly described.
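The core of the chapter, Gaussian elimination with partial pivoting followed by back-substitution, in a compact Python sketch (the reviewer's illustration, without the error analysis the book adds):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting, then back-substitution.
    A is a list of rows; the inputs are copied, not modified."""
    n = len(A)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]   # augmented matrix
    for k in range(n):
        # Partial pivoting: bring the largest |entry| of column k to row k.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                   # back-substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

x = solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])     # solution [0.8, 1.4]
```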
All the methods are well developed, but effective modifications of the elimination methods which exploit the storage hierarchy (cache memory) of modern computers are not included. Hints to corresponding fundamental packages such as BLAS and LAPACK are missing. The same holds for Chapters 7, 9, and 11. The treatment of general sparse matrices by direct methods is also not covered.
Based on the Gauss-Jordan algorithm, matrix inversion is treated in Chapter 7 as a two-array and as an in-place method, both without and with partial pivoting, including the inversion of triangular and of complex matrices, the latter using real arithmetic. For the case \(\|A\| < 1\), an iterative procedure for matrix inversion is demonstrated.
Large-scale linear systems often cannot be solved by direct methods because of the high computational time and storage requirements. Thus, Chapter 8 is focused on iterative methods, which are also well suited to sparse matrices because the original matrix is not changed during the iteration. Point iterative methods and block iterative methods (for \((n,n)\)-matrices partitioned into consistent blocks of order \(m\)) of Jacobi, Gauss-Seidel, and successive over-relaxation (SOR) type are deduced. Hints to conditions on the matrix which ensure the convergence of the methods are given. Based on the number of multiplications, comparisons between direct, point iterative, and block iterative methods are illustrated. Computational techniques such as the solution of \(m\) linear systems of smaller size in each iteration of a block iteration algorithm by a direct method are discussed.
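A sketch of the point Jacobi iteration (the reviewer's illustration; the book also derives the Gauss-Seidel and SOR variants):

```python
def jacobi(A, b, iters=100):
    """Point Jacobi iteration: each new x_i uses only values from the previous
    sweep; convergent, e.g., for strictly diagonally dominant A."""
    n = len(A)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# Strictly diagonally dominant system: 4x + y = 6, x + 3y = 7 (solution x=1, y=2)
x = jacobi([[4.0, 1.0], [1.0, 3.0]], [6.0, 7.0])
```

The matrix entries are only read, never overwritten, which is exactly the property that makes such methods attractive for sparse matrices.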
Modern methods such as Krylov subspace algorithms like GMRES, CG, or BiCGSTAB, preconditioning, and multigrid methods or hints to corresponding software packages are not included.
Linear least-squares problems are considered in Chapter 9. For over-determined systems \(Ax = b\), where \(A\) is an \((m,n)\) matrix with \(m>n\), a vector \(x\) is to be found that minimizes the function \({\rho}^2(x) = {\|b-Ax \|}_2^2\). Starting with the existence of solutions and a criticism of the formerly common solution via the corresponding normal equations, which are in general ill-conditioned, methods such as the \(QR\) decomposition by Gram-Schmidt and Householder and the Givens \(QR\) factorization are treated in order to solve the least-squares problem in two steps, decomposition and back-substitution.
The numerical solution of non-linear equations is extended to systems of non-linear equations in Chapter 10. The author starts with the problem formulation and convergence conditions (fixed point theorem) for generalized iteration methods. The fixed point iterative technique is improved by the Seidel iteration. Newton's method and the Newton-Raphson method are described first for two-dimensional problems and then formulated for the \(n\)-dimensional case. Solving the linear problem in Newton's method by an inner iteration yields the described Newton-Gauss-Seidel and Newton-SOR methods.
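The two-dimensional Newton-Raphson iteration can be sketched as follows (the reviewer's Python illustration; the inner 2x2 linear solve is done here by Cramer's rule):

```python
def newton2d(f, g, jac, x, y, iters=20):
    """Newton-Raphson for f(x,y)=0, g(x,y)=0: each step solves the 2x2
    linear system J * delta = -(f, g) and updates (x, y) by delta."""
    for _ in range(iters):
        fv, gv = f(x, y), g(x, y)
        j11, j12, j21, j22 = jac(x, y)
        det = j11 * j22 - j12 * j21
        dx = (-fv * j22 + gv * j12) / det    # Cramer's rule for delta_x
        dy = (-gv * j11 + fv * j21) / det    # Cramer's rule for delta_y
        x, y = x + dx, y + dy
    return x, y

# Intersection of the circle x^2 + y^2 = 2 with the line y = x:
f = lambda x, y: x * x + y * y - 2.0
g = lambda x, y: y - x
jac = lambda x, y: (2.0 * x, 2.0 * y, -1.0, 1.0)   # rows of the Jacobian
x, y = newton2d(f, g, jac, 2.0, 1.0)               # converges to (1, 1)
```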
Chapter 11 is devoted to the matrix eigenvalue problem. The special and the generalized eigenvalue problem and some properties and applications of eigenvalues are formulated. The characteristic equation of a matrix \(A\) and the polynomial method to determine the eigenvalues and eigenvectors of \(A\), including Leverrier's algorithm to evaluate the coefficients of the characteristic polynomial, are deduced. Danilewsky's method, an algorithm to compute the eigenvectors when the eigenvalues are known, is described. The power method, the inverse power method, and the Rayleigh quotient iteration are treated, which, in contrast to the polynomial algorithm, can also be applied to matrices of higher order. Further, the computation of eigenpairs of symmetric matrices applying the Jacobi method and the modifications of Pope and Tompkin are described. A computational method for the determination of eigenpairs of tridiagonal matrices using a sequence of polynomials is considered as a basis for the tridiagonalization of a matrix, which can be done, e.g., by Givens' method or the Householder procedure. The \(QL\) algorithm including shifting is described for the determination of eigenvalues of tridiagonal real symmetric matrices. The \(QR\) algorithm is considered for general real matrices. Rutishauser's \(LU\) method for eigenpairs of arbitrary non-symmetric matrices and the solution of the generalized eigenvalue problem \(Ax = \lambda Bx\), with \(A\) symmetric and \(B\) a diagonal or symmetric positive definite matrix, are sketched.
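The power method with a Rayleigh-quotient eigenvalue estimate, in a short Python sketch (the reviewer's illustration):

```python
def power_method(A, iters=200):
    """Power method: repeated multiplication by A aligns the iterate with the
    dominant eigenvector; the Rayleigh quotient then estimates lambda."""
    n = len(A)
    v = [1.0] + [0.0] * (n - 1)
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(c) for c in w)
        v = [c / norm for c in w]        # rescale to avoid overflow
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(v[i] * Av[i] for i in range(n)) / sum(v[i] * v[i] for i in range(n))
    return lam, v

# Dominant eigenvalue of [[2, 1], [1, 2]] is 3, eigenvector along (1, 1).
lam, v = power_method([[2.0, 1.0], [1.0, 2.0]])
```

Convergence is governed by the ratio of the two largest eigenvalue magnitudes, which is why shifted and inverse variants are treated in the chapter as well.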
After introducing the interpolation and extrapolation problem, which can be solved in different ways, a large number of methods are described in Chapter 12. Starting with simple linear interpolation, the Lagrangian interpolation method, which is of more theoretical than practical value, Aitken-Neville's method and Newton's divided difference interpolation are discussed. Introducing a classification of finite difference operators, a number of difference methods are deduced. The reader will find Newton's forward and backward interpolation formulae, the central difference, Gauss's forward and backward, Stirling's, and Bessel's formulae, and so on. Further points are inverse interpolation, Hermite and Chebyshev interpolation, multi-dimensional interpolation, and piecewise polynomial interpolation by natural splines.
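Newton's divided difference interpolation, one of the central methods of the chapter, in a Python sketch (the reviewer's illustration):

```python
def divided_differences(xs, ys):
    """Coefficients f[x0], f[x0,x1], ... of the Newton interpolation form,
    computed in place from the divided-difference table."""
    coef = ys[:]
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton form by a Horner-like nested scheme."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

xs, ys = [0.0, 1.0, 2.0], [1.0, 3.0, 7.0]   # samples of x^2 + x + 1
coef = divided_differences(xs, ys)
print(newton_eval(xs, coef, 3.0))            # 13.0, the exact value at x = 3
```

Adding a further interpolation node only appends one coefficient, an advantage of the Newton form over the Lagrangian one.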
Using the interpolation-based differentiation formulae, such as Newton's forward and backward difference derivatives and Stirling's central difference derivatives, numerical differentiation problems are treated in Chapter 13, including computational problems in the finite difference approximation of derivatives for solving differential equations.
Chapter 14 is focused on numerical integration. The reader will find descriptions of the trapezoidal rule and Simpson's rule, of Romberg's method, the Gaussian quadrature, spline integration and others, including error analysis and comparisons. Further topics are adaptive methods, singular integrals, multiple integration, Monte Carlo methods, and, as an application, the solution of integral equations.
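The composite trapezoidal and Simpson rules, whose error behaviour the chapter compares, sketched in Python (the reviewer's illustration):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals; error O(h^2)."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def simpson(f, a, b, n):
    """Composite Simpson's rule (n must be even); error O(h^4)."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + i * h) for i in range(1, n, 2))   # odd nodes
    s += 2.0 * sum(f(a + i * h) for i in range(2, n, 2))   # even nodes
    return h * s / 3.0

# The integral of sin over [0, pi] is exactly 2.
t = trapezoid(math.sin, 0.0, math.pi, 100)
s = simpson(math.sin, 0.0, math.pi, 100)
```

With the same 100 subintervals, Simpson's rule is several orders of magnitude more accurate here, illustrating the error orders compared in the chapter.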
Initial value problems of ordinary differential equations are the subject of Chapter 15. After the formulation of the problem and of the conditions for a unique solution, first semi-numerical methods such as the Taylor series and Picard's method are considered, followed by simple difference methods, such as Euler's, the trapezium, Heun's, and the midpoint method, and single-step methods, here represented by a number of variants of the Runge-Kutta method including the Runge-Kutta-Fehlberg and the implicit Runge-Kutta method. Included are discretization errors, remarks on consistency and convergence of the methods, and step size control. The next sections are devoted to multi-step methods in the form of predictor-corrector algorithms, such as the Milne-Simpson, the Hamming, and the Adams-Bashforth-Moulton method, including comparisons to the single-step procedures, discretization errors, and stability definitions, followed by multi-valued methods and the treatment of systems of first-order ordinary differential equations, higher-order equations and difference methods (Numerov's method, Störmer's rule) for second-order conservative ordinary differential equations.
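The classical fourth-order Runge-Kutta method, one of the variants treated in the chapter, in a Python sketch (the reviewer's illustration):

```python
def rk4(f, t0, y0, h, steps):
    """Classical fourth-order Runge-Kutta for y' = f(t, y)."""
    t, y = t0, y0
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
        k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)   # weighted average of slopes
        t += h
    return y

# y' = y, y(0) = 1 has the exact solution e^t, so y(1) should be close to e.
y1 = rk4(lambda t, y: y, 0.0, 1.0, 0.01, 100)
```

Four evaluations of \(f\) per step buy a global error of order \(h^4\), the trade-off against multi-step methods discussed in the chapter.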
Even though stability definitions are included, the treatment of practically important stiff equations is not.
Boundary value problems for ordinary differential equations are treated in Chapter 16. Starting with the formulation of conditions that ensure a unique solution of the task, the reader is introduced to simple shooting methods, which are extended to multiple shooting methods. In the next sections, finite difference methods are discussed, taking into account the linear and the non-linear case. In contrast to the above-mentioned methods, which provide the solution only on a discrete set of points, the finite element method is able to produce a solution at every point. Considered are the collocation method, the Galerkin method, and the Rayleigh-Ritz method, using the calculus of variations and the minimization of functionals.
Partial differential equations are treated in Chapters 17–20.
Chapter 17 contains a classification of the equations and of the corresponding initial and boundary conditions. A list of standard partial differential equations (Laplace, Poisson, Helmholtz, wave, diffusion, Schrödinger equation and so on) is classified into hyperbolic, parabolic and elliptic equations. The finite difference method and the finite elements method for the solution are outlined.
The numerical solution of parabolic partial differential equations in Chapter 18 is focused on the standard heat equation because many parabolic equations can be transformed into this equation. Starting with the derivation of the one- and multi-dimensional heat equations, explicit and implicit finite difference schemes including convergence results are derived. Crank-Nicolson methods and the alternating direction implicit (ADI) procedure, including the solution of the corresponding difference equations by iterative algorithms, are considered.
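The explicit finite difference scheme for the one-dimensional heat equation can be sketched as follows (the reviewer's Python illustration; the stability bound \(r \le 1/2\) is the classical one for this scheme):

```python
def heat_explicit(u, alpha, dx, dt, steps):
    """Explicit (forward-time, centred-space) scheme for u_t = alpha * u_xx
    with fixed Dirichlet boundary values; stable only for
    r = alpha*dt/dx**2 <= 1/2."""
    r = alpha * dt / (dx * dx)
    for _ in range(steps):
        u = [u[0]] + [u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])
                      for i in range(1, len(u) - 1)] + [u[-1]]
    return u

# Initial temperature spike in the middle of a rod with ends held at 0:
u = [0.0, 0.0, 1.0, 0.0, 0.0]
u = heat_explicit(u, alpha=1.0, dx=1.0, dt=0.25, steps=1)
print(u)   # [0.0, 0.25, 0.5, 0.25, 0.0]
```

The strict time-step restriction of this explicit scheme is what motivates the implicit and Crank-Nicolson methods treated in the chapter.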
Hyperbolic differential equations are treated in Chapter 19. Numerous physical problems can be formulated by the wave equation; thus, this hyperbolic equation is the central point of the chapter. After the derivation of the one-dimensional second-order wave equation, this equation is rewritten as a system of first-order equations. Finite difference methods (explicit, implicit, Lax-Wendroff, leapfrog) are presented for these equations, including convergence conditions (the Courant-Friedrichs-Lewy condition, CFL) and a scheme for second-order equations. Other topics are the analytic solution by d'Alembert, the method of characteristics, and the consideration of two- and three-dimensional problems.
Methods for elliptic partial differential equations are presented in Chapter 20 with a concentration on Laplace, Poisson, and Helmholtz equations. Several finite difference methods, such as the five-point formula for Laplace 2D, seven-point formula for Laplace 3D, five-point formula for Poisson and Helmholtz 2D, and so on, are treated. Special coordinate systems and irregular grids are included. The finite element method is demonstrated for the Poisson equation.
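The five-point formula for Laplace's equation, combined with Jacobi sweeps as in Chapter 8, in a Python sketch (the reviewer's illustration):

```python
def laplace_jacobi(u, iters=500):
    """Five-point formula for Laplace's equation on a rectangular grid:
    every interior value becomes the average of its four neighbours
    (Jacobi sweeps; the boundary values are held fixed)."""
    nr, nc = len(u), len(u[0])
    for _ in range(iters):
        new = [row[:] for row in u]
        for i in range(1, nr - 1):
            for j in range(1, nc - 1):
                new[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j] +
                                    u[i][j - 1] + u[i][j + 1])
        u = new
    return u

# 3x3 grid: the single interior point becomes the mean of its 4 neighbours.
u = laplace_jacobi([[0.0, 1.0, 0.0],
                    [1.0, 0.0, 1.0],
                    [0.0, 1.0, 0.0]], iters=1)
print(u[1][1])   # 1.0
```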
Chapter 21 gives an introduction to parallel computing and programming. The reader will find short descriptions of Flynn’s classification scheme for parallel computer architectures, of shared and distributed memory systems, parallel programming techniques, such as the message passing and the data parallel model, and parallel programming languages such as HIGH PERFORMANCE FORTRAN. A number of step-by-step procedures for basic numerical operations in parallel computing (e.g. integration, interpolation, Gaussian elimination) are presented.
The interesting Chapter 22 is devoted to neurocomputing, an alternative model, based on the brain's cognitive processes, to obtain numerical solutions. The mathematical model of an artificial neural network, its basic component, the artificial neuron, the activation functions of a neuron, and the learning process to acquire knowledge that is stored in the synapses are introduced. A number of special networks are described: the multi-layer perceptron network, the radial basis function network, and several Hopfield networks. Methods based on these networks for numerical differentiation and integration, for linear and non-linear equations, for interpolation and extrapolation, and for ordinary differential equations are presented.
The last chapter is about the solution of difference equations. A relation or equation connecting a number of consecutive elements of a sequence is to be solved. The round-off errors of the forward and backward recurrences require an accurate numerical analysis. The problems are demonstrated by examples such as the computation of a real root of a polynomial and a second-order boundary value problem.
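The instability of forward recurrences mentioned here can be demonstrated with a classical example (the reviewer's Python illustration, not necessarily the book's own example): the integrals \(I_n = \int_0^1 x^n e^{x-1}\,dx\) satisfy \(I_n = 1 - n I_{n-1}\) with \(I_0 = 1 - 1/e\).

```python
import math

def forward(n):
    """Forward recurrence: the initial rounding error is multiplied by n!."""
    I = 1.0 - 1.0 / math.e
    for k in range(1, n + 1):
        I = 1.0 - k * I
    return I

def backward(n, start=40):
    """Backward recurrence from a crude guess: the error is divided by the
    same factorial factor, so the start value hardly matters."""
    I = 0.0                       # rough guess for I_start
    for k in range(start, n, -1):
        I = (1.0 - I) / k
    return I

print(forward(20))    # wildly wrong: the rounding error has exploded
print(backward(20))   # accurate, about 0.0455
```

The same relation, run in the two directions, behaves completely differently, which is exactly the kind of analysis the chapter asks for.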
The monograph gives a diversified insight into important numerical methods and is well suited to help readers develop their own codes. However, the addition of hints to advanced numerical methods and special software packages would be helpful for prospective modern engineers.

MSC:

65-01 Introductory exposition (textbooks, tutorial papers, etc.) pertaining to numerical analysis
65Fxx Numerical linear algebra
65Lxx Numerical methods for ordinary differential equations
65Mxx Numerical methods for partial differential equations, initial value and time-dependent initial-boundary value problems
65Nxx Numerical methods for partial differential equations, boundary value problems
65Yxx Computer aspects of numerical algorithms
65D05 Numerical interpolation
65D07 Numerical computation using splines
65D30 Numerical integration
65G50 Roundoff error
65H04 Numerical computation of roots of polynomial equations
65H05 Numerical computation of solutions to single equations
65H10 Numerical computation of solutions to systems of equations
92B20 Neural networks for/in biological studies, artificial life and related topics
97M50 Physics, astronomy, technology, engineering (aspects of mathematics education)
00A06 Mathematics for nonmathematicians (engineering, social sciences, etc.)
65C05 Monte Carlo methods
65R20 Numerical methods for integral equations

Keywords:

number systems; finite-precision arithmetic; numerical methods; algebraic and transcendental equations; numerical linear algebra; systems of non-linear equations; numerical differentiation and integration; initial and boundary value problems; ordinary differential equations; parabolic, hyperbolic and elliptic differential equations; finite difference methods; finite element methods; parallel computing; neurocomputing; programming languages; monograph; truncation error; rounding error; residual error; forward and backward error analysis; error bounds; BASIC; PASCAL; FORTRAN; C; C++; COBOL; MATLAB; MATHEMATICA; bisection; regula falsi; Ridders method; iterative method; convergence; error estimate; methods of Aitken, Newton-Raphson, Kizner and Brent; roots of polynomial equations; Horner scheme; Lehmer’s method; Bairstow-Hitchcock method; complex roots; Laguerre’s and Müller’s algorithm; Bernoulli’s procedure; Graeffe’s root-squaring method; quotient-difference algorithm; stability analysis; direct methods; Gaussian elimination; condition number; iterative refinement; factorization modifications of Crout and Doolittle; Wilkinson algorithm; Gauss-Jordan method; Cholesky factorization; BLAS; LAPACK; matrix inversion; partial pivoting; iterative methods; sparse matrices; Jacobi; Gauss-Seidel; successive over-relaxation; Krylov subspace algorithms; GMRES; CG; BiCGSTAB; preconditioning; multigrid methods; linear least-squares problems; \(QR\) decomposition; Gram-Schmidt; Householder; Givens \(QR\) factorization; Seidel iteration; Newton’s method; Newton-Raphson method; matrix eigenvalue problem; Leverrier’s algorithm; Danilewsky’s method; inverse power method; Rayleigh quotient iteration; Jacobi method; \(QL\) algorithm; Rutishauser’s \(LU\) method; interpolation; extrapolation; Lagrangian interpolation method; Aitken-Neville’s method; Newton’s divided difference interpolation; Stirling’s and Bessel’s formulae; Hermite and Chebyshev interpolation; Simpson’s rule; Romberg’s
method; Gaussian quadrature; spline integration; Monte Carlo methods; integral equations; Taylor series; Picard’s method; Runge-Kutta method; consistency; step size control; multi-step methods; predictor-corrector algorithms; Milne-Simpson; Hamming; Adams-Bashforth-Moulton method; stability; Numerov’s method; Störmer’s rule; shooting methods; collocation method; Galerkin method; Rayleigh-Ritz method; Laplace; Poisson; Helmholtz; wave; diffusion; Schrödinger equation; heat equation; Crank-Nicolson methods; alternating direction implicit procedure; wave equation; Lax-Wendroff; Courant-Friedrichs-Lewy’s condition; method of characteristics; artificial neural network; Hopfield networks