
Hybrid deterministic-stochastic gradient Langevin dynamics for Bayesian learning. (English) Zbl 1329.60260

Summary: We propose a new algorithm for obtaining the Bayesian posterior distribution via hybrid deterministic-stochastic gradient Langevin dynamics. To speed up convergence and reduce computational cost, it is common to use a stochastic gradient method that approximates the full gradient by sampling a subset of the large dataset. Stochastic gradient methods make fast progress initially, but they often slow down in the late stage as the iterates approach the desired solution. Conventional (deterministic) gradient methods ultimately converge better, but at the expense of evaluating the full gradient at every iteration. Our hybrid method combines the advantages of both approaches for constructing the Bayesian posterior distribution. We prove convergence of the algorithm using weak convergence methods, and we illustrate its effectiveness and improved accuracy numerically.
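To make the hybrid idea concrete, the following is a minimal Python sketch of a Langevin sampler that uses subsampled (stochastic) gradients in the early iterations and switches to full (deterministic) gradients in the late stage. The switch rule, step-size schedule, and parameter names (`switch_frac`, `step0`, `gamma`) are illustrative assumptions for this sketch, not the authors' exact scheme.

```python
import numpy as np

def hybrid_sgld(grad_log_prior, grad_log_lik, data, theta0,
                n_iter=2000, batch_size=32, switch_frac=0.5,
                step0=1e-3, gamma=0.55):
    """Hybrid deterministic-stochastic gradient Langevin dynamics (sketch).

    Minibatch gradients are used for the first switch_frac of the run,
    then full gradients afterwards; the schedule here is an assumption.
    """
    N = len(data)
    theta = np.asarray(theta0, dtype=float)
    samples = []
    for t in range(n_iter):
        eps = step0 * (t + 1) ** (-gamma)       # decaying step size
        if t < switch_frac * n_iter:
            # early stage: unbiased minibatch estimate of the full gradient
            idx = np.random.choice(N, batch_size, replace=False)
            g = grad_log_prior(theta) + (N / batch_size) * sum(
                grad_log_lik(theta, data[i]) for i in idx)
        else:
            # late stage: exact full gradient for better accuracy
            g = grad_log_prior(theta) + sum(
                grad_log_lik(theta, x) for x in data)
        noise = np.random.normal(0.0, np.sqrt(eps), size=theta.shape)
        theta = theta + 0.5 * eps * g + noise   # Langevin update
        samples.append(theta.copy())
    return np.array(samples)

# Toy usage: posterior over the mean of Gaussian data with a N(0,1) prior.
data = np.random.normal(1.0, 1.0, size=500)
samples = hybrid_sgld(
    grad_log_prior=lambda th: -th,          # gradient of log N(0,1) prior
    grad_log_lik=lambda th, x: x - th,      # gradient of log N(th,1) likelihood
    data=data, theta0=np.zeros(1))
print(samples[len(samples) // 2:].mean())   # rough posterior mean estimate
```

The design choice mirrors the trade-off described in the summary: cheap, noisy gradients give fast initial progress, while exact gradients remove the subsampling noise once the iterates are near the target distribution.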

MSC:

60J22 Computational methods in Markov chains
62F15 Bayesian inference
60H10 Stochastic ordinary differential equations (aspects of stochastic analysis)
60H35 Computational methods for stochastic equations (aspects of stochastic analysis)
65C40 Numerical analysis or methods applied to Markov chains
65C30 Numerical solutions to stochastic differential and integral equations