Applied interval analysis. With examples in parameter and state estimation, robust control and robotics. Incl. 1 CD-ROM.

*(English)* Zbl 1023.65037
London: Springer. xvi, 379 p. (2001).

In most engineering problems, we look for a design that satisfies certain constraints. Often, in addition to these constraints, we have a well-defined objective function, and we want to find, among all possible designs, the one for which the value of this objective function is the largest (or the smallest).

In mathematical terms, constraints on the parameters \(x_1,\ldots,x_n\) are usually represented by equalities and/or inequalities; thus, design problems can be formulated as constraint satisfaction problems or as constrained optimization problems. Many numerical algorithms exist for solving these problems. For these algorithms, we usually only know the rate at which they converge, but in a real-life application, when we stop the algorithm after a certain number of iterations, we rarely know how close we are to satisfying the constraints and/or to the global maximum. In many practical applications, it is desirable to know how close we are to the maximum and whether there are maxima that we have not found. Indeed, if additional computations can drastically improve the objective function and thus lead to a substantial benefit, they are worth pursuing; otherwise, it is not worth wasting computer time on looking for a minuscule improvement. In other words, we need numerical methods with automatic result verification.

For example, in the 1D case, in addition to the approximate value \(\widetilde x\) of the location \(x\) of the desired global maximum, we want to know the accuracy \(\Delta\) of this approximation. Once we know \(\Delta\), we know that the actual (unknown) value \(x\) is somewhere in the interval \([\widetilde x-\Delta,\widetilde x+\Delta]\). Some other numerical method may provide a different approximate value \(x'\) and different bounds on the possible negative and positive approximation errors that result in the same interval of possible values of \(x\). From this viewpoint, the only information we get about \(x\) is this interval – which value from this interval the algorithm produced as an approximation does not matter. Our objective is thus to produce intervals of possible values. Because of this, methods with automatic result verification are also called interval analysis, or interval computations.

Once we know the intervals of possible values for \(x_1,\ldots,x_n\), we may want to find the intervals of possible values for some characteristics \(y=f(x_1,\ldots,x_n)\) depending on \(x_i\). For several cases, there are explicit formulas that describe the range of \(y\) in terms of ranges of \(x_i\), e.g., in the cases when \(n=2\) and \(f(x_1,x_2)\) is an arithmetic operation (+, \(-\), etc.); these cases form interval arithmetic.
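These endpoint formulas are easy to state in code. The following is a minimal sketch of interval arithmetic in Python (an illustrative `Interval` class written for this review, not the implementation from the book): each operation returns an interval guaranteed to contain every possible value of the operation when the operands range over their input intervals.

```python
class Interval:
    """A closed interval [lo, hi] of real numbers."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # [a, b] + [c, d] = [a + c, b + d]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # [a, b] - [c, d] = [a - d, b - c]
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # [a, b] * [c, d]: the range is spanned by the endpoint products
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"


x = Interval(1, 2)
y = Interval(-1, 3)
print(x + y)   # [0, 5]
print(x * y)   # [-2, 6]
```

Division follows the same endpoint pattern but needs a case distinction when the divisor interval contains zero, which is why it is omitted from this sketch.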

In the multi-dimensional case, instead of an interval, we have a multi-dimensional range of possible values. Each 1D projection of this range can be described by an interval \(I_i\), and the range itself can be described as a union of finitely many rectangular boxes of the type \(I_1\times \ldots\times I_n\). This representation turns out to be efficient in solving constraint satisfaction and optimization problems: once we have a representation of this type, we can bisect all the boxes, use interval arithmetic to check whether the constraints can be satisfied anywhere on each subbox, and dismiss the subboxes for which this is not the case. For minimization, we can perform a similar bisection and dismiss every subbox on which the entire range of the objective function lies above the smallest of the already achieved values.
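The bisect-and-dismiss idea for minimization can be sketched in a few lines of Python. This is an illustrative 1D fragment, not one of the book's algorithms; the objective \(f(x)=x^2-2x\) on \([-3,3]\) (minimizer \(x=1\)) and its naive interval enclosure are chosen here just for the example. Promising boxes are processed in order of their interval lower bound, and a box is dismissed once that lower bound exceeds the best guaranteed upper bound found so far.

```python
import heapq

def f_range(lo, hi):
    """Naive interval enclosure of f(x) = x**2 - 2*x over [lo, hi]."""
    sq_lo = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    sq_hi = max(lo * lo, hi * hi)
    # x**2 lies in [sq_lo, sq_hi]; -2*x lies in [-2*hi, -2*lo]
    return sq_lo - 2.0 * hi, sq_hi - 2.0 * lo

def minimize(lo, hi, tol=1e-6):
    """Return an interval guaranteed to contain the global minimizer."""
    f_lo0, f_hi0 = f_range(lo, hi)
    heap = [(f_lo0, lo, hi)]        # boxes ordered by interval lower bound
    best_upper = f_hi0              # smallest guaranteed value of f so far
    small = []                      # boxes already narrower than tol
    while heap:
        f_lo, a, b = heapq.heappop(heap)
        if f_lo > best_upper:
            continue                # dismiss: the minimum cannot lie here
        if b - a < tol:
            small.append((a, b))
            continue
        m = 0.5 * (a + b)           # bisect the box
        for a2, b2 in ((a, m), (m, b)):
            g_lo, g_hi = f_range(a2, b2)
            best_upper = min(best_upper, g_hi)
            if g_lo <= best_upper:
                heapq.heappush(heap, (g_lo, a2, b2))
    # keep only the small boxes still consistent with the final bound
    keep = [(a, b) for (a, b) in small if f_range(a, b)[0] <= best_upper]
    return min(a for a, _ in keep), max(b for _, b in keep)

lo, hi = minimize(-3.0, 3.0)
print(lo, hi)   # a narrow enclosure of the minimizer x = 1
```

Because the enclosure `f_range` never underestimates the true range, the returned interval is guaranteed to contain the global minimizer; this is the automatic result verification that plain floating-point search cannot provide.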

These are just the main ideas. The authors start from the basics and proceed to state-of-the-art algorithms, all with automatic result verification, for solving systems of nonlinear equations and/or inequalities and for optimization. In Part III, they show how these methods can be applied to parameter estimation, robust control, and robot planning; in Part IV, they show how to efficiently program the corresponding algorithms. The book has many examples. It can serve both as a research monograph and as a graduate textbook (I myself used it when teaching interval computations). My only (minor) warning to readers who want to learn interval analysis from this book is that the authors' terms sometimes differ from those used by most interval researchers.

Reviewer: Vladik Ya. Kreinovich (El Paso)

##### MSC:

| Code | Classification |
| --- | --- |
| 65G40 | General methods in interval analysis |
| 65-02 | Research exposition (monographs, survey articles) pertaining to numerical analysis |
| 65G20 | Algorithms with automatic result verification |
| 65G30 | Interval and finite arithmetic |
| 70E60 | Robot dynamics and control of rigid bodies |
| 65K05 | Numerical mathematical programming methods |
| 90C30 | Nonlinear programming |
| 62F10 | Point estimation |
| 65C60 | Computational problems in statistics (MSC2010) |
| 93B30 | System identification |
| 93D09 | Robust stability |