Proximal Denoiser for Convergent Plug-and-Play Optimization with Nonconvex Regularization
From MaRDI portal
Publication:6389722
arXiv: 2201.13256 · MaRDI QID: Q6389722
Author name not available
Publication date: 31 January 2022
Abstract: Plug-and-Play (PnP) methods solve ill-posed inverse problems through iterative proximal algorithms by replacing a proximal operator with a denoising operation. When applied with deep neural network denoisers, these methods have shown state-of-the-art visual performance for image restoration problems. However, their theoretical convergence analysis is still incomplete. Most existing convergence results either assume nonexpansive denoisers, which is unrealistic, or limit their analysis to strongly convex data-fidelity terms in the inverse problem being solved. Recently, it was proposed to train the denoiser as a gradient descent step on a functional parameterized by a deep neural network. Using such a denoiser guarantees the convergence of the PnP version of the Half-Quadratic Splitting (PnP-HQS) iterative algorithm. In this paper, we show that this gradient denoiser can actually correspond to the proximal operator of another scalar function. Given this new result, we exploit the convergence theory of proximal algorithms in the nonconvex setting to obtain convergence results for PnP-PGD (Proximal Gradient Descent) and PnP-ADMM (Alternating Direction Method of Multipliers). When built on top of a smooth gradient denoiser, we show that PnP-PGD and PnP-ADMM are convergent and target stationary points of an explicit functional. These convergence results are confirmed by numerical experiments on deblurring, super-resolution and inpainting.
Has companion code repository: https://github.com/samuro95/prox-pnp
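The PnP-PGD scheme described in the abstract replaces the proximal step of proximal gradient descent with a denoiser, iterating x_{k+1} = D(x_k - tau * grad f(x_k)). A minimal sketch of this structure is below; note that the denoiser here is a hand-rolled moving-average smoother used only as a placeholder (the paper uses a trained gradient-step network), and the 1-D inpainting setup is an illustrative assumption, not taken from the paper or its repository.

```python
import numpy as np

def pnp_pgd(y, A, At, denoiser, tau, n_iter=100):
    """Plug-and-Play proximal gradient descent:
    x_{k+1} = D(x_k - tau * grad f(x_k)),
    with data-fidelity f(x) = 0.5 * ||A x - y||^2, so grad f(x) = A^T(A x - y).
    The denoiser D stands in for the proximal operator of an (implicit,
    possibly nonconvex) regularizer."""
    x = At(y)  # simple initialization from the observations
    for _ in range(n_iter):
        grad = At(A(x) - y)          # gradient of the data-fidelity term
        x = denoiser(x - tau * grad) # denoising step replaces the prox
    return x

# Toy inpainting problem: A is a diagonal 0/1 masking operator (self-adjoint).
rng = np.random.default_rng(0)
x_true = np.sin(np.linspace(0, 4 * np.pi, 200))
mask = rng.random(200) > 0.3          # keep ~70% of the samples
A = lambda x: mask * x
At = A                                 # masking is its own adjoint
y = A(x_true) + 0.01 * rng.standard_normal(200)

# Placeholder denoiser: a 5-tap moving average (purely illustrative).
kernel = np.ones(5) / 5
denoiser = lambda x: np.convolve(x, kernel, mode="same")

x_hat = pnp_pgd(y, A, At, denoiser, tau=1.0, n_iter=50)
print(np.mean((x_hat - x_true) ** 2))  # reconstruction MSE
```

With tau = 1 and a 0/1 mask, the gradient step simply re-imposes the observed samples before each denoising pass, which is why even this crude smoother recovers the smooth signal reasonably well.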