Numerical methods for stochastic processes.

*(English)* Zbl 0822.60003
Wiley Series in Probability and Mathematical Statistics. Applied Probability and Statistics. New York: Wiley. xvii, 359 p. (1994).

From the preface: “The main motivation for this book is to give greater rigor to numerical treatments of stochastic models. This […] extends the domain of application of mathematics and allows proofs that are an important preliminary to implementation on computers. […] The book nevertheless remains introductory and presents the current state of the main methods and ideas, the cases for which they have been proved, and the boundary areas where they work less well and need improvement.” This is a fairly accurate description of the book, making clear that its main thrust is theoretical rather than application-oriented.

The book is divided into five chapters: Preliminaries, Computation of expectations in finite dimension, Simulation of random processes, Deterministic resolution of some Markovian problems, Stochastic differential equations and Brownian functionals. Chapter 1 is a summary of notations and basic results from analysis and probability theory. Chapter 2 discusses in detail the numerical computation of expectations of random variables on a finite-dimensional space. There are two main directions to proceed. A Monte Carlo method starts from a sequence \((F_n)\) of i.i.d. random variables with distribution \(\nu\) and approximates \(E[g(F_1)] = \int g\,d\nu\) by the average \({1\over N} \sum^N_{n=1} g(F_n)\), the theoretical justification being the strong law of large numbers. In practice, one has a sequence of pseudo-random numbers \((u_n)\) between 0 and 1, and one acts as if this were a sample from a sequence \((U_n)\) of independent random variables distributed uniformly on \([0,1]\). To generate \(F_n\) from \(U_n\) or \(u_n\), one uses a suitable transformation depending on the distribution \(\nu\). For quasi-Monte Carlo methods, one first introduces the notion of a uniformly distributed sequence; this is a sequence \((\xi_n)\) of numbers in \([0,1]^s\) such that the corresponding empirical distribution \(\mu_N = {1\over N} \sum^N_{n=1} \delta_{\xi_n}\) converges weakly to the Lebesgue measure \(\lambda^s\) on \([0,1]^s\). The discrepancy \(D^*_\infty(\xi, N)\) measures the distance between \(\lambda^s\) and \(\mu_N\), and the Koksma-Hlawka inequality states that \[ \left| \int g(x)\, \lambda^s(dx) - \int g(x)\, \mu_N(dx) \right| \leq \text{Var}(g)\, D^*_\infty(\xi, N) \] for a function \(g\) of finite variation. To approximate \(\int g\, d\lambda^s\), one is thus interested in sequences of low discrepancy, and the authors provide several classes of examples.
Quasi-Monte Carlo methods are theoretically reliable, usually faster in low dimensions and do not require a stochastic stopping criterion, but their usefulness in practice deteriorates very rapidly as the dimension \(s\) of the problem at hand increases. The chapter concludes with a discussion of the numerical computation of conditional expectations. One algorithm uses approximations by polynomials, while a second is based on the martingale convergence theorem.
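To make the comparison concrete, here is a minimal sketch (the reviewer's own illustration, not taken from the book) that estimates \(\int_{[0,1]^2} xy \,dx\,dy = 1/4\) both by plain Monte Carlo with pseudo-random numbers and by quasi-Monte Carlo using the Halton low-discrepancy sequence (radical-inverse construction in bases 2 and 3); the integrand and all names are illustrative choices:

```python
import random

def van_der_corput(n, base):
    # Radical inverse of n in the given base: mirror the base-b digits of n
    # across the "decimal" point to get a point in [0,1).
    q, denom = 0.0, 1.0
    while n:
        n, r = divmod(n, base)
        denom *= base
        q += r / denom
    return q

def halton(n_points, bases=(2, 3)):
    # 2-dimensional Halton sequence: one van der Corput sequence per coordinate.
    return [tuple(van_der_corput(i, b) for b in bases)
            for i in range(1, n_points + 1)]

def g(x, y):
    return x * y  # exact integral over [0,1]^2 is 1/4

N = 10_000
rng = random.Random(0)

# Plain Monte Carlo: average over pseudo-random (u_n) pairs.
mc = sum(g(rng.random(), rng.random()) for _ in range(N)) / N

# Quasi-Monte Carlo: average over the deterministic low-discrepancy sequence.
qmc = sum(g(x, y) for x, y in halton(N)) / N
```

In dimension \(s=2\) the quasi-Monte Carlo error here is typically an order of magnitude smaller than the Monte Carlo error at the same \(N\), consistent with the Koksma-Hlawka bound for a low-discrepancy sequence.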

Chapter 3 deals with the simulation of random processes. This usually involves functionals on infinite-dimensional (path) spaces and thus requires new concepts and methods. In particular, the quasi-Monte Carlo approach no longer works, since the discrepancy of a sequence in \([0,1]^{\mathbb{N}}\) will typically not converge to 0. As the preface says, one main point of the book is the introduction and systematic use of the shift method in infinite-dimensional simulation problems. The shift method is based on the pointwise ergodic theorem: for a stationary ergodic process \((X_n)\) with shift operator \(\theta\), the averages \({1\over N} \sum^{N-1}_{n=0} G(X_\bullet) \circ \theta^n\) converge almost surely to \(E[G(X_\bullet)]\). There is no generally valid rate of convergence, but the authors characterize classes of functions for which there is a law of the iterated logarithm. The next section gives an overview of the \({\mathcal L}^2\)-theory for stationary processes and fields, together with a subsection on discretizations and on approximations by processes that can be simulated. The section on Markov processes primarily discusses the simulation of Brownian motion before it hits a hyperplane and points out the connection to conditional diffusions. Processes with independent increments are treated because of their importance in model building; the authors here emphasize the usefulness of the Bochner subordination principle for applications to simulations. The chapter ends with a very brief introduction to point processes.
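A small sketch of the underlying idea (the reviewer's illustration, not the book's shift method on Wiener space): for a stationary ergodic AR(1) process started in its stationary law, the ergodic average of the path functional \(G(X_\bullet) = X_0 X_1\) along shifts converges to \(E[X_0 X_1] = \varphi \sigma^2/(1-\varphi^2)\); the parameter values are arbitrary illustrative choices:

```python
import math
import random

rng = random.Random(1)
phi, sigma = 0.5, 1.0
var = sigma**2 / (1 - phi**2)        # stationary variance of the AR(1) process

# Start from the stationary distribution so the process is stationary,
# which is what the pointwise ergodic theorem requires.
prev = rng.gauss(0, math.sqrt(var))

N = 200_000
acc = 0.0
for _ in range(N):
    cur = phi * prev + sigma * rng.gauss(0, 1)
    acc += prev * cur                # G composed with the n-th shift: X_n * X_{n+1}
    prev = cur

estimate = acc / N                   # ergodic average, a.s. limit is phi * var
```

Here a single simulated path replaces independent replications of the functional, which is exactly the appeal of the shift method on infinite-dimensional path spaces.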

Chapter 4 is concerned with analytic methods for solving stochastic problems in the framework of Markov processes “where simulation methods should be used only as a last resort”. Topics include the balayage algorithm, the réduite and its connection to optimal stopping, and the carré du champ operator. The authors “lay out the more useful tools for calculation without too much explanation”, and so the chapter is more an overview than a thorough introduction.

Finally, Chapter 5 deals with stochastic differential equations and Brownian functionals. There is a short treatment of the Euler scheme for the numerical solution of SDEs and a more detailed discussion of the irregularity (in the space variable \(\omega\)) of Brownian functionals and its consequences for simulation. The authors then propose a general definition of simulatable functionals on a Polish space and illustrate this idea for the case of the Wiener space. This approach may turn out to be useful for future developments. A link to the existing literature on numerical methods for SDEs is provided by the section on symbolic expansions of solutions to SDEs. Working at a merely formal level, the authors show how to develop certain Brownian functionals in terms of the Wiener chaos decomposition; this is related to higher order numerical methods for SDEs. The final section deals with the shift method applied to some Brownian functionals by writing the Wiener space as a suitable product space, an approach which is also useful for theoretical developments.
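The Euler scheme mentioned above can be sketched in a few lines (the reviewer's illustration; the drift, diffusion coefficient and parameter values are arbitrary choices, not taken from the book). For geometric Brownian motion \(dX_t = \mu X_t\,dt + \sigma X_t\,dW_t\) one discretizes the time interval and replaces the Brownian increments by scaled Gaussian draws; the mean \(E[X_T] = x_0 e^{\mu T}\) is known in closed form and serves as a check:

```python
import math
import random

rng = random.Random(2)
mu, sig = 0.05, 0.2          # drift and diffusion coefficients
x0, T, M = 1.0, 1.0, 100     # initial value, horizon, number of time steps
dt = T / M

def euler_path():
    # One Euler step: X_{k+1} = X_k + mu*X_k*dt + sig*X_k*sqrt(dt)*Z_k
    x = x0
    for _ in range(M):
        x += mu * x * dt + sig * x * math.sqrt(dt) * rng.gauss(0, 1)
    return x

# Monte Carlo average of the terminal values of N Euler paths.
N = 20_000
est = sum(euler_path() for _ in range(N)) / N   # should be close to x0 * exp(mu*T)
```

The overall error combines the weak discretization bias of the Euler scheme, of order \(O(1/M)\), with the Monte Carlo sampling error of order \(O(1/\sqrt{N})\).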

While the authors do not claim to be comprehensive, the book provides a rather broad introduction to the area indicated by its title. Some of the topics which are not discussed include variance reduction, stochastic approximation and control, steady-state simulation, more refined methods for discretizing SDEs, numerical aspects of nonlinear filtering, and linear algebra methods for large Markov chains. The style of the book is rather abstract, with much more emphasis on theoretical results than on the practical implementation of algorithms. The exposition makes pleasant reading, even if some sections could benefit from a few more words of motivation for the theory being developed. The placement of figures is at times very unfortunate; some appear 30 or more pages after the topic they belong to. This is, however, a very minor cosmetic drawback to a book which on the whole offers a welcome contribution to this important and still growing field of research and application.

Reviewer: M.Schweizer (Berlin)

##### MSC:

| Code | Classification |
| --- | --- |
| 60-02 | Research exposition (monographs, survey articles) pertaining to probability theory |
| 60Gxx | Stochastic processes |
| 60Jxx | Markov processes |
| 60H10 | Stochastic ordinary differential equations (aspects of stochastic analysis) |
| 65C05 | Monte Carlo methods |