Importance sampling correction versus standard averages of reversible MCMCs in terms of the asymptotic variance
From MaRDI portal
Publication:2196543
DOI: 10.1016/j.spa.2020.05.006
zbMath: 1455.60099
arXiv: 1706.09873
OpenAlex: W2796367494
Wikidata: Q109744819 (Scholia: Q109744819)
MaRDI QID: Q2196543
Publication date: 3 September 2020
Published in: Stochastic Processes and their Applications
Full work available at URL: https://arxiv.org/abs/1706.09873
Keywords: importance sampling; Markov chain Monte Carlo; asymptotic variance; pseudo-marginal algorithm; unbiased estimator; delayed-acceptance
Related Items (5)
- Variance bounding of delayed-acceptance kernels
- Ensemble MCMC: accelerating pseudo-marginal MCMC for state space models using the ensemble Kalman filter
- Efficiency of delayed-acceptance random walk Metropolis algorithms
- Conditional particle filters with diffuse initial distributions
- Unbiased Inference for Discretely Observed Hidden Markov Model Diffusions
Cites Work
- The pseudo-marginal approach for efficient Monte Carlo computations
- Delayed acceptance particle MCMC for exact inference in stochastic kinetic models
- Establishing some order amongst exact approximations of MCMCs
- On the ergodicity properties of some adaptive MCMC algorithms
- Harris recurrence of Metropolis-within-Gibbs and trans-dimensional Markov chains
- Central limit theorem for additive functionals of reversible Markov processes and applications to simple exclusions
- A note on Metropolis-Hastings kernels for general state spaces
- Geometric ergodicity of Metropolis algorithms
- Renewal theory and computable convergence rates for geometrically ergodic Markov chains
- Nonlocal Monte Carlo algorithm for self-avoiding walks with fixed endpoints.
- Convergence properties of pseudo-marginal Markov chain Monte Carlo algorithms
- Scalable posterior approximations for large-scale Bayesian inverse problems via likelihood-informed parameter and state reduction
- Uniform ergodicity of the iterated conditional SMC and geometric ergodicity of particle Gibbs samplers
- A vanilla Rao-Blackwellization of Metropolis-Hastings algorithms
- Examples comparing importance sampling and the Metropolis algorithm
- Explicit error bounds for Markov chain Monte Carlo
- Importance Sampling for Stochastic Simulations
- Optimum Monte-Carlo sampling using Markov chains
- Speeding up MCMC by Delayed Acceptance and Data Subsampling
- Importance Sampling in Stochastic Programming: A Markov Chain Monte Carlo Approach
- Markov Chains and Stochastic Stability
- Geometric convergence and central limit theorems for multidimensional Hastings and Metropolis algorithms
- Covariance structure of the Gibbs sampler with applications to the comparisons of estimators and augmentation schemes
- Safe and Effective Importance Sampling
- Markov Chains
- Particle Markov Chain Monte Carlo Methods
- Efficient implementation of Markov chain Monte Carlo when using an unbiased likelihood estimator
- On random- and systematic-scan samplers
- Pseudo-marginal Metropolis–Hastings sampling using averages of unbiased estimators
- Coupling and Ergodicity of Adaptive Markov Chain Monte Carlo Algorithms
- Monte Carlo sampling methods using Markov chains and their applications
- SMC²: An Efficient Algorithm for Sequential Analysis of State Space Models