
More risk-sensitive Markov decision processes. (English) Zbl 1291.90289

Summary: We investigate the problem of minimizing a certainty equivalent of the total or discounted cost generated by a Markov decision process (MDP) over a finite and an infinite horizon. In contrast to the risk-neutral criterion, this optimization criterion takes the variability of the cost into account; it contains the classical risk-sensitive criterion with an exponential utility as a special case. We show that this optimization problem can be solved by an ordinary MDP with extended state space and give conditions under which an optimal policy exists. In the case of an infinite time horizon, we show that the minimal discounted cost can be obtained by value iteration and can be characterized as the unique solution of a fixed-point equation, using a “sandwich” argument. Interestingly, it turns out that in the case of a power utility the problem simplifies and is of similar complexity to the exponential utility case, yet it has not been treated in the literature so far. We also establish the validity (and convergence) of the policy improvement method. A simple numerical example, namely the classical repeated casino game, is considered to illustrate the influence of the certainty equivalent and its parameters. Finally, the average cost problem is also investigated. Surprisingly, it turns out that under suitable recurrence conditions on the MDP, the minimal average cost for a convex power utility does not depend on the parameter of the utility function and is equal to the risk-neutral average cost. This is in contrast to the classical risk-sensitive criterion with exponential utility.
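
For orientation, the criterion can be written in the following standard form (a generic sketch; the paper's exact notation and conventions may differ). For a strictly increasing utility function U and the total discounted cost X = \sum_k \beta^k c_k generated by the MDP under a policy \pi, the certainty equivalent is

\[ \rho_U^\pi(X) \;=\; U^{-1}\!\bigl(\mathbb{E}^\pi[\,U(X)\,]\bigr), \]

to be minimized over all policies. For the exponential utility U(x) = e^{\gamma x}, \gamma > 0, this reduces to the classical risk-sensitive criterion

\[ \rho_\gamma^\pi(X) \;=\; \tfrac{1}{\gamma}\,\log \mathbb{E}^\pi\bigl[e^{\gamma X}\bigr], \]

while a convex power utility (for instance U(x) = x^\gamma with \gamma > 1 on nonnegative costs, an illustrative choice) yields the case the summary singles out as new.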
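To make the exponential-utility special case concrete, the following is a minimal value-iteration sketch (not the paper's algorithm) for a finite-horizon, undiscounted total-cost MDP, where the exponential utility lets the Bellman recursion factorize multiplicatively. The kernel P, cost matrix c, and parameter values are made-up placeholders.

import numpy as np

# Minimal sketch of risk-sensitive value iteration for the classical
# exponential-utility special case U(x) = exp(g*x), g > 0, with undiscounted
# total cost over a finite horizon. P[a, s, s'] is the transition kernel,
# c[s, a] the stage cost; both are hypothetical toy data.
def exp_utility_value_iteration(P, c, g, horizon):
    n_actions, n_states, _ = P.shape
    V = np.ones(n_states)                 # terminal value U(0) = 1
    for _ in range(horizon):
        # Q[a, s] = exp(g * c(s, a)) * sum_{s'} P(s' | s, a) * V(s')
        Q = np.exp(g * c.T) * (P @ V)
        V = Q.min(axis=0)                 # minimize over actions
    return np.log(V) / g                  # certainty equivalent of the cost

# Toy two-state, two-action example (made-up numbers).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],  # action 0
              [[0.5, 0.5], [0.5, 0.5]]]) # action 1
c = np.array([[1.0, 2.0],                # c[s, a]
              [0.5, 3.0]])
print(exp_utility_value_iteration(P, c, g=0.5, horizon=20))

With a positive discount factor the exponential term no longer factors out of the expectation, which is why the paper passes to an ordinary MDP on an extended state space (presumably augmenting the state with the cost accumulated so far).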

MSC:

90C40 Markov and semi-Markov decision processes
91B06 Decision theory