An Acceleration Strategy for Randomize-Then-Optimize Sampling Via Deep Neural Networks
From MaRDI portal
Publication: 5079536
DOI: 10.4208/jcm.2102-m2020-0339
zbMath: 1499.62346
arXiv: 2104.06285
OpenAlex: W3209989675
MaRDI QID: Q5079536
Publication date: 27 May 2022
Published in: Journal of Computational Mathematics
Full work available at URL: https://arxiv.org/abs/2104.06285
Mathematics Subject Classification:
- Artificial neural networks and deep learning (68T07)
- Probabilistic models, generic numerical methods in probability and statistics (65C20)
- Monte Carlo methods (65C05)
- Inverse problems for PDEs (35R30)
- Neural nets and related approaches to inference from stochastic processes (62M45)
Related Items (1)
Uses Software
Cites Work
- The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo
- A random map implementation of implicit filters
- Emulation of higher-order tensors in manifold Monte Carlo methods for Bayesian inverse problems
- Stochastic spectral methods for efficient Bayesian solution of inverse problems
- Dimensionality reduction and polynomial chaos acceleration of Bayesian inference in inverse problems
- Geometric MCMC for infinite-dimensional inverse problems
- Statistical and computational inverse problems.
- Deep UQ: learning deep neural network surrogate models for high dimensional uncertainty quantification
- Adaptive multi-fidelity polynomial chaos approach to Bayesian inference in inverse problems
- Bayesian Calibration of Computer Models
- Inverse problems: A Bayesian perspective
- A Stochastic Newton MCMC Method for Large-Scale Statistical Inverse Problems with Application to Seismic Inversion
- Data-driven model reduction for the Bayesian solution of inverse problems
- Parameter and State Model Reduction for Large-Scale Statistical Inverse Problems
- Large-Scale Machine Learning with Stochastic Gradient Descent
- Approximation errors and model reduction with an application in optical diffusion tomography
- Handbook of Markov Chain Monte Carlo
- Posterior consistency for Gaussian process approximations of Bayesian posterior distributions
- Convergence analysis of surrogate-based methods for Bayesian inverse problems
- Riemann Manifold Langevin and Hamiltonian Monte Carlo Methods
- AN ADAPTIVE MULTIFIDELITY PC-BASED ENSEMBLE KALMAN INVERSION FOR INVERSE PROBLEMS
- An Adaptive Surrogate Modeling Based on Deep Neural Networks for Large-Scale Bayesian Inverse Problems
- Stochastic Collocation Algorithms Using $l_1$-Minimization for Bayesian Solution of Inverse Problems
- Non‐linear model reduction for uncertainty quantification in large‐scale inverse problems
- Bayesian Inverse Problems with $l_1$ Priors: A Randomize-Then-Optimize Approach