On maximum a posteriori estimation with Plug & Play priors and stochastic gradient descent
DOI: 10.1007/s10851-022-01134-7 · arXiv: 2201.06133 · OpenAlex: W4317379396 · MaRDI QID: Q6155451
Valentin De Bortoli, Julie Delon, Marcelo Pereyra, Alain Durmus, Rémi Laumont, Andrés Almansa
Publication date: 12 June 2023
Published in: Journal of Mathematical Imaging and Vision
Full work available at URL: https://arxiv.org/abs/2201.06133
Keywords: inverse problems; denoising; stochastic gradient descent; deblurring; inpainting; Bayesian imaging; Plug and Play
MSC classifications: Analysis of algorithms and problem complexity (68Q25); Bayesian inference (62F15); Bayesian problems; characterization of Bayes procedures (62C10); Numerical mathematical programming methods (65K05); Nonconvex programming, global optimization (90C26); Numerical optimization and variational techniques (65K10); Computing methodologies for image processing (68U10); Computer science (68-XX); Information and communication theory, circuits (94-XX)
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Nonlinear total variation based noise removal algorithms
- The Little Engine that Could: Regularization by Denoising (RED)
- Lectures on convex optimization
- An algorithm for total variation minimization and applications
- Asymptotic bias of stochastic gradient search
- Convergence of a stochastic approximation version of the EM algorithm
- Do stochastic algorithms avoid traps?
- A first-order primal-dual algorithm for convex problems with applications to imaging
- Symmetrizing Smoothing Filters
- A Nonlocal Bayesian Image Denoising Algorithm
- Posterior Expectation of the Total Variation Model: Properties and Experiments
- Proximal Splitting Methods in Signal Processing
- Tweedie’s Formula and Selection Bias
- Accelerating Proximal Markov Chain Monte Carlo by Using an Explicit Stabilized Method
- Applications of a Kushner and Clark lemma to general classes of stochastic algorithms
- Should Penalized Least Squares Regression be Interpreted as Maximum A Posteriori Estimation?
- A Convergent Image Fusion Algorithm Using Scene-Adapted Gaussian-Mixture-Based Denoising
- DeepISP: Toward Learning an End-to-End Image Processing Pipeline
- Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising
- Optimization Methods for Large-Scale Machine Learning
- Efficient Bayesian Computation by Proximal Markov Chain Monte Carlo: When Langevin Meets Moreau
- General results on the convergence of stochastic algorithms
- De-noising by soft-thresholding
- Bayesian Imaging Using Plug & Play Priors: When Langevin Meets Tweedie
- Maximum Likelihood Estimation of Regularization Parameters in High-Dimensional Inverse Problems: An Empirical Bayesian Approach Part I: Methodology and Experiments
- Plug-and-Play Unplugged: Optimization-Free Reconstruction Using Consensus Equilibrium
- Solving inverse problems using data-driven models
- Scalable Bayesian Uncertainty Quantification in Imaging Inverse Problems via Convex Optimization
- Maximum-a-Posteriori Estimation with Bayesian Confidence Regions
- The Bayesian Choice
- Breaking the Curse of Dimensionality with Convex Neural Networks
- Solving Inverse Problems With Piecewise Linear Estimators: From Gaussian Mixture Models to Structured Sparsity
- A Review of Image Denoising Algorithms, with a New One
- Learning Maximally Monotone Operators for Image Recovery
- Convex analysis and monotone operator theory in Hilbert spaces