High-dimensional Bayesian inference via the unadjusted Langevin algorithm


DOI: 10.3150/18-BEJ1073
zbMATH Open: 1428.62111
arXiv: 1605.01559
OpenAlex: W2562674776
MaRDI QID: Q2325343

Authors: Alain Durmus, Éric Moulines

Publication date: 25 September 2019

Published in: Bernoulli

Abstract: We consider in this paper the problem of sampling a high-dimensional probability distribution $\pi$ having a density with respect to the Lebesgue measure on $\mathbb{R}^d$, known up to a normalization constant, $x \mapsto \pi(x) = \mathrm{e}^{-U(x)} / \int_{\mathbb{R}^d} \mathrm{e}^{-U(y)} \, \mathrm{d}y$. Such a problem naturally occurs, for example, in Bayesian inference and machine learning. Under the assumptions that $U$ is continuously differentiable, $\nabla U$ is globally Lipschitz and $U$ is strongly convex, we obtain non-asymptotic bounds on the convergence to stationarity, in Wasserstein distance of order 2 and in total variation distance, of the sampling method based on the Euler discretization of the Langevin stochastic differential equation, for both constant and decreasing step sizes. The dependence of these bounds on the dimension of the state space is explicit. The convergence of an appropriately weighted empirical measure is also investigated, and bounds on the mean square error and an exponential deviation inequality are reported for functions which are measurable and bounded. An illustration on Bayesian inference for binary regression is presented to support our claims.
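
The sampling method described in the abstract, the unadjusted Langevin algorithm (ULA), is the Euler discretization $X_{k+1} = X_k - \gamma \nabla U(X_k) + \sqrt{2\gamma}\, Z_{k+1}$, $Z_{k+1} \sim \mathcal{N}(0, I_d)$, of the Langevin diffusion $\mathrm{d}X_t = -\nabla U(X_t)\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}B_t$. The sketch below is a minimal illustration of this recursion with a constant step size, not the paper's code; the function names and the Gaussian test potential are assumptions made for the example.

```python
import numpy as np

def ula(grad_U, x0, step, n_iters, rng=None):
    """Unadjusted Langevin algorithm (hypothetical helper, not the paper's code).

    Euler discretization of dX_t = -grad U(X_t) dt + sqrt(2) dB_t:
        X_{k+1} = X_k - step * grad_U(X_k) + sqrt(2 * step) * Z_{k+1},
    with Z_{k+1} ~ N(0, I_d) and a constant step size.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_iters, x.size))
    for k in range(n_iters):
        noise = rng.standard_normal(x.size)
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * noise
        samples[k] = x
    return samples

# Example: U(x) = ||x||^2 / 2 (standard Gaussian target), so grad U(x) = x;
# this U is strongly convex with globally Lipschitz gradient (L = m = 1),
# matching the assumptions in the abstract.
if __name__ == "__main__":
    d = 100
    chain = ula(grad_U=lambda x: x, x0=np.zeros(d), step=0.05, n_iters=10_000)
    print(chain[1000:].mean(), chain[1000:].var())  # roughly 0 and 1
```

With a constant step size the chain is not exactly stationary for $\pi$; the paper quantifies the resulting bias and the convergence rate in Wasserstein-2 and total variation distance, with explicit dependence on the dimension $d$.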


Full work available at URL: https://arxiv.org/abs/1605.01559
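
The abstract's supporting illustration concerns Bayesian inference for binary regression. As a hedged companion sketch (synthetic data, illustrative names, reusing the `ula` function from the sketch above), one can form the gradient of the negative log-posterior for logistic regression with a Gaussian prior; the prior term makes $U$ strongly convex and the logistic term has a Lipschitz gradient, so the paper's assumptions hold.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def make_grad_U(X, y, prior_var=1.0):
    """Gradient of U(b) = sum_i log(1 + exp(-y_i <x_i, b>)) + ||b||^2 / (2 * prior_var),
    the negative log-posterior for logistic regression with labels y in {-1, +1}
    and a N(0, prior_var * I) prior. Hypothetical helper for illustration only.
    """
    def grad_U(b):
        margins = y * (X @ b)                          # y_i <x_i, b>
        likelihood_grad = -(X.T @ (y * sigmoid(-margins)))
        return likelihood_grad + b / prior_var         # strongly convex prior term
    return grad_U

# Synthetic data; `ula` is the sketch above. Step size chosen small enough
# for the Lipschitz constant of grad U on this problem size.
rng = np.random.default_rng(0)
n, d = 500, 10
X = rng.standard_normal((n, d))
beta_true = rng.standard_normal(d)
y = np.where(rng.random(n) < sigmoid(X @ beta_true), 1.0, -1.0)
chain = ula(make_grad_U(X, y), x0=np.zeros(d), step=1e-3, n_iters=20_000)
print(chain[5000:].mean(axis=0))  # crude posterior-mean estimate after burn-in
```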


