
Trajectorial dissipation and gradient flow for the relative entropy in Markov chains. (English) Zbl 1491.60129

Summary: We study the temporal dissipation of variance and relative entropy for ergodic Markov chains in continuous time, and compute explicitly the corresponding dissipation rates. These are identified, as is well known, in the case of the variance, in terms of an appropriate Hilbertian norm; and in the case of the relative entropy, in terms of a Dirichlet form which morphs into a version of the familiar Fisher information under conditions of detailed balance. Here we obtain trajectorial versions of these results, valid along almost every path of the random motion and most transparent in the backwards direction of time. Martingale arguments and time reversal play crucial roles, as in the recent work of Karatzas, Schachermayer and Tschiderer for conservative diffusions. Extensions are developed to general “convex divergences” and to countable state spaces. The steepest-descent and gradient-flow properties for the variance, the relative entropy, and appropriate generalizations are studied along with their respective geometries under conditions of detailed balance, leading to a very direct proof of the HWI inequality of Otto and Villani in the present context.
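For orientation, the dissipation identities alluded to in the summary admit the following standard formulation; the notation below (jump rates \(q(x,y)\), generator \(\mathcal{L}\), invariant law \(\pi\), time-\(t\) law \(p_t\), likelihood ratio \(\rho_t = p_t/\pi\), Dirichlet form \(\mathcal{E}(f,g) = -\langle f, \mathcal{L}g\rangle_{L^2(\pi)}\)) is chosen here and is not a quotation from the paper:
\[
\frac{\mathrm{d}}{\mathrm{d}t}\,\operatorname{Var}_\pi\bigl(P_t f\bigr) \;=\; -\,2\,\mathcal{E}\bigl(P_t f,\,P_t f\bigr),
\qquad
\frac{\mathrm{d}}{\mathrm{d}t}\,H\bigl(p_t\,\big|\,\pi\bigr) \;=\; -\,\mathcal{E}\bigl(\rho_t,\,\log\rho_t\bigr).
\]
Under detailed balance \(\pi(x)\,q(x,y) = \pi(y)\,q(y,x)\), the entropic dissipation rate takes the Fisher-information form
\[
\mathcal{E}\bigl(\rho_t,\,\log\rho_t\bigr)
\;=\; \tfrac12 \sum_{x,y} \pi(x)\,q(x,y)\,\bigl(\rho_t(y)-\rho_t(x)\bigr)\bigl(\log\rho_t(y)-\log\rho_t(x)\bigr) \;\ge\; 0.
\]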

MSC:

60J27 Continuous-time Markov processes on discrete state spaces
60G44 Martingales with continuous parameter
60H30 Applications of stochastic analysis (to PDEs, etc.)
60J35 Transition functions, generators and resolvents
94A17 Measures of information, entropy