Stochastic controls. Hamiltonian systems and HJB equations. (English) Zbl 0943.93002
Applications of Mathematics. 43. New York, NY: Springer. xx, 438 p. (1999).
This book is a unified treatment of the finite-horizon optimal control of stochastic differential equations (SDEs).
The preface addresses the main questions that motivated the endeavour. They originate in research carried out at Fudan University in 1987-89 on the optimal control of SDEs with control-dependent diffusions. It was discovered that a second-order adjoint equation had to be introduced, and this led to new insights into the relations between necessary and sufficient conditions of optimality, a central theme of this monograph. The rest of the preface presents the contents of the book clearly and could itself serve as a decent review.
Chapter one is a nice introduction to stochastic calculus, starting from the definition of probability spaces and moving on to stochastic processes, stopping times, martingales, Itô’s integral, and SDEs with their strong and weak solutions.
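For orientation, the central object of this calculus, written here in a generic notation (assumed for this review, not quoted from the book), is the Itô SDE \[ dx(t)= b(t,x(t))\,dt+ \sigma(t,x(t))\,dW(t),\qquad x(0)= x_0, \] where \(W\) is a standard Brownian motion: a strong solution is an adapted process satisfying this equation on a prescribed filtered probability space, while a weak solution may come with its own probability space and driving Brownian motion.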
Chapter two presents stochastic optimal control problems. After introducing the deterministic case with applications in economics and management, stochastic optimal control problems are described in both a weak and a strong formulation. The SDE constraint is such that controls appear in both the drift and the diffusion. The authors then give sufficient conditions (convexity assumptions) for the existence of a solution in both formulations and prove their statements. A section shows that the reachable set (a key object in some existence theorems) of a linear stochastic system may fail to be convex or closed in \(L^2(\Omega,\mathbb{R}^n)\) (where \(\Omega\) is the underlying sample space), in contrast to the deterministic case, even when the control set is compact. The chapter closes with statements of other optimal control problems studied elsewhere.
Chapter three is devoted to the maximum principle. First, the deterministic case is revisited and Pontryagin’s result is proved, as usual, via spike variations and Gronwall’s inequality. It is also shown to be a sufficient condition for optimality under some convexity assumptions, using a property of Clarke’s generalized gradient. Then the stochastic maximum principle is obtained. In addition to the usual adjoint variable solving a backward SDE (BSDE) and the variable “dual” to the noise, two new matrix variables associated with another BSDE have to be introduced because of the control-dependent diffusion. They carry second-order information which has to be included in the expression of the Hamiltonian for the maximum principle. The proof of the second author [Stochastics Stochastics Rep. 36, No. 3-4, 137-161 (1991; Zbl 0756.93087)] (cf. also S. Peng [SIAM J. Control Optimization 28, No. 4, 966-979 (1990; Zbl 0712.93067)]) is provided. The condition also becomes sufficient under assumptions analogous to those of the deterministic case (cf. the publication of the second author [IEEE Trans. Autom. Control 41, No. 8, 1176-1179 (1996; Zbl 0857.93099)]). The chapter ends with a maximum principle for stochastic systems with state constraints.
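Schematically, and with signs and notation that vary across references (the convention below is assumed here, not quoted from the book), the resulting maximum condition along an optimal pair \((\bar x(\cdot),\bar u(\cdot))\) reads \[ H(t,\bar x(t),u,p(t),q(t))- H(t,\bar x(t),\bar u(t),p(t),q(t)) \] \[ +\tfrac12\,\text{tr}\Bigl\{\bigl[\sigma(t,\bar x(t),u)- \sigma(t,\bar x(t),\bar u(t))\bigr]^t P(t)\bigl[\sigma(t,\bar x(t),u)- \sigma(t,\bar x(t),\bar u(t))\bigr]\Bigr\}\le 0 \] for all admissible \(u\), where \((p,q)\) solves the first-order adjoint BSDE and \((P,Q)\) the second-order one; the quadratic term is precisely the second-order information forced in by the control-dependent diffusion.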
Chapter four studies the dynamic programming approach. As before, the deterministic case is described first, through Bellman’s principle and the Hamilton-Jacobi equation. Viscosity solutions, a generalization allowing for nonsmooth (though, in this book, continuous) solutions, are defined, and their existence and uniqueness are discussed. A parallel development follows in the stochastic setting, and properties of the value function are provided. Some proofs in this chapter are quite technical.
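For a cost of running-plus-terminal type, and in a generic notation assumed here, the stochastic Hamilton-Jacobi-Bellman equation discussed in this chapter reads \[ V_t+ \inf_{u\in U}\Bigl\{\tfrac12\,\text{tr}\bigl[\sigma\sigma^t(t,x,u)\,V_{xx}\bigr]+ \langle b(t,x,u),V_x\rangle+ f(t,x,u)\Bigr\}= 0,\qquad V(T,x)= h(x); \] it reduces to the first-order Hamilton-Jacobi equation when \(\sigma\equiv 0\), and the value function is in general only a viscosity solution of it.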
Chapter five, which “is the core of the entire book”, brings together the tools developed in the previous two, i.e., roughly speaking, necessary and sufficient conditions of optimality. Starting with the classical Hamilton-Jacobi theory from analytical mechanics, where the Hamiltonian dynamics is recovered from an assumed smooth solution \(V\) of the Hamilton-Jacobi equation and where the converse holds locally, the authors develop a similar story when controls are present. In both cases, the equivalence rests on the possibility of expressing the adjoint variable as \(p={\partial V\over\partial x}\), or on nonsmooth versions of this relation via viscosity solutions. The stochastic generalization is developed next. As pointed out in chapter four, one has to use an extended Hamiltonian, and the Hamilton-Jacobi equation contains extra terms; the authors develop the theory in both the smooth and the nonsmooth case. Consequences are drawn concerning optimal feedback synthesis. The philosophy of this chapter goes back to the thesis of the second author (1988).
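The deterministic half of this equivalence rests on a short computation worth recording: if \(V\) is smooth and solves \(V_t+ H(x,V_x)= 0\), then along a trajectory with \(\dot{\bar x}= H_p\), differentiating the equation in \(x\) gives \(V_{tx}+ H_x+ V_{xx}H_p= 0\), whence \[ \frac{d}{dt}\,V_x(t,\bar x(t))= V_{tx}+ V_{xx}\dot{\bar x}= -H_x- V_{xx}H_p+ V_{xx}H_p= -H_x, \] so \(p(t)= V_x(t,\bar x(t))\) indeed satisfies the adjoint equation \(\dot p= -H_x\); the stochastic version established in this chapter involves second derivatives of \(V\) as well.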
Chapter six deals with the linear-quadratic (LQ) optimal control problem. First, the classical deterministic case is recalled. Using a brute-force functional-analytic method and a suitable manipulation of the cost, the problem can be reformulated as a parametric minimization in the Hilbert space of controls, the parameters being the initial time and the initial state. But the solution involves the inversion of a complicated operator, which helps one appreciate the usefulness of the common approach via the maximum principle. The solution of the problem is parametrized by the solution to the celebrated Riccati equation, which can also be obtained via dynamic programming or via a “completion of squares” method. Next, the stochastic LQ problem in the weak formulation is stated. The cost is \[ J(s,y,u(\cdot))= E\Biggl(\int^T_s \bigl[(Qx,x)+ 2(Sx,u)+ (Ru,u)\bigr]\,dt+ \tfrac12 (Gx(T),x(T))\Biggr) \] and the constraint is the SDE \[ dx= [A(t)x+ B(t)u+ b(t)]\,dt+ [C(t)x+ D(t)u+ \sigma(t)]\,dW,\qquad x(s)= y, \] where \(Q(t)\), \(R(t)\) and \(G\) are symmetric matrices of suitable sizes, but not necessarily positive semidefinite. This situation arises in a multiobjective context. The diffusion being control-dependent, decision making may increase the volatility and hence the risk taken. In the multiobjective case this leads to a meaningful trade-off: the control energy has to increase to realize a certain goal, but the risk grows through the control-dependent diffusion, and one may fail to realize another goal. Existence of a solution to this problem is characterized via an analogue of the previous parametric minimization approach. The Hamiltonian system is introduced next by applying the results of chapter three, and necessary conditions of optimality leading to a control solution are derived. The optimal control can be re-expressed and parametrized by the solution to a Riccati-type equation called “the stochastic Riccati equation”: \[ \dot P+ PA+ A^t P+ C^t PC+ Q= \Lambda^t(R+ D^t PD)^{-1}\Lambda,\qquad \Lambda= B^t P+ S+ D^t PC, \] with \(P(T)= G\) and \(R+ D^t PD> 0\). This equation, too, can be obtained via dynamic programming or a completion of squares. Its global solvability is investigated next in a number of cases, following studies of the authors and other researchers. A final section considers a mean-variance portfolio selection problem which illustrates the above-mentioned trade-off.
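Granting global solvability of the stochastic Riccati equation, and taking \(b\equiv\sigma\equiv 0\) to suppress the inhomogeneous terms (an assumption made here only for the sketch), the optimal control is the linear state feedback \[ \bar u(t)= -(R+ D^t PD)^{-1}\Lambda\,\bar x(t)= -(R+ D^t PD)^{-1}(B^t P+ S+ D^t PC)\,\bar x(t), \] in which the control weight \(R\) of the deterministic theory is replaced by \(R+ D^t PD\), one more visible trace of the control-dependent diffusion.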
The last chapter is devoted to BSDEs, which arise as the evolution equations of the adjoint variables in the stochastic maximum principle. Adapted solutions are defined in the linear and nonlinear cases, and conditions for their existence and uniqueness are given and proved using functional-analytic methods. BSDEs can be used to represent solutions of certain partial differential equations (typically of Hamilton-Jacobi type) via Feynman-Kac type formulae. Another section deals with forward-backward SDEs (FBSDEs) and the associated “four-step scheme”. This is applied to some option pricing problems.
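In a generic notation assumed here, a BSDE asks for an adapted pair \((Y,Z)\) satisfying \[ dY(t)= -g(t,Y(t),Z(t))\,dt+ Z(t)\,dW(t),\qquad Y(T)= \xi, \] the second unknown \(Z\) being what makes adaptedness achievable; the nonlinear Feynman-Kac link then takes the form \(Y(t)= v(t,X(t))\), \(Z(t)= v_x(t,X(t))\,\sigma(t,X(t))\), where \(X\) is a forward diffusion and \(v\) solves an associated semilinear parabolic PDE.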
More than thirty pages of references (!) and an index end this very well written book. Good examples and motivating applications (taken mainly from economics and finance; the Nobel prize-winning contributions of Markowitz, Black and Scholes are put in perspective) can be found throughout the text. The chapters end with very valuable historical notes on the emergence and evolution of the topics treated; this is where the bibliographical references are cited.
The authors succeed in a masterful way in showing the interplay between necessary and sufficient conditions for optimality, pointing out when and why the equivalence holds (chapters two, three, four). In chapter five, which focuses on a special case, this perspective is taken up again. In those chapters the authors start from the simpler deterministic case and then generalize to the stochastic situation; this allows them to point out differences while the main methods remain parallel, and it provides a unifying view of the topic. Those chapters describe stochastic control per se, while the first and last chapters deal with stochastic analysis: an introduction that makes the volume self-contained, and advanced technical material at the end. So one comes back to the initial “stochastic waters” after having sprinkled them over the world of optimization (which has enriched them). This superb book unifies, in a harmonious and structured way, the building blocks of stochastic optimal control while fitting in important contributions of the authors.
If the reviewer had to buy one book on stochastic control, it would be this one.

MSC:
93-02 Research exposition (monographs, survey articles) pertaining to systems and control theory
93E20 Optimal stochastic control
49L25 Viscosity solutions to Hamilton-Jacobi equations in optimal control and differential games
91G80 Financial applications of other theories
49L20 Dynamic programming in optimal control and differential games
49N10 Linear-quadratic optimal control problems
49J55 Existence of optimal solutions to problems involving randomness
49K45 Optimality conditions for problems involving randomness
60G35 Signal detection and filtering (aspects of stochastic processes)
60H10 Stochastic ordinary differential equations (aspects of stochastic analysis)