Learning the optimal Tikhonov regularizer for inverse problems
From MaRDI portal
Publication: Q6370092
arXiv: 2106.06513
Author name not available.
Publication date: 11 June 2021
Abstract: In this work, we consider the linear inverse problem y = Ax + ε, where A: X → Y is a known linear operator between the separable Hilbert spaces X and Y, x is a random variable in X, and ε is a zero-mean random process in Y. This setting covers several inverse problems in imaging, including denoising, deblurring, and X-ray tomography. Within the classical framework of regularization, we focus on the case where the regularization functional is not given a priori but learned from data. Our first result is a characterization of the optimal generalized Tikhonov regularizer with respect to the mean squared error. We find that it is completely independent of the forward operator A and depends only on the mean and covariance of x. Then, we consider the problem of learning the regularizer from a finite training set in two different frameworks: one supervised, based on samples of both x and y, and one unsupervised, based only on samples of x. In both cases, we prove generalization bounds under some weak assumptions on the distributions of x and ε, including the case of sub-Gaussian variables. Our bounds hold in infinite-dimensional spaces, thereby showing that finer and finer discretizations do not make this learning problem harder. The results are validated through numerical simulations.
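The abstract's main characterization, that the MSE-optimal generalized Tikhonov reconstruction depends on the distribution of x only through its mean and covariance, can be illustrated in finite dimensions. The sketch below is not the authors' companion code: the dimensions, variable names, and Gaussian sampling are illustrative assumptions. It forms the estimator x̂ = μ + ΣAᵀ(AΣAᵀ + Σ_ε)⁻¹(y − Aμ), whose learned ingredients (μ, Σ) do not involve the forward operator A.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 7                               # illustrative dimensions (assumption)

A = rng.standard_normal((m, n))           # known forward operator A: X -> Y
mu = rng.standard_normal(n)               # mean of x (in practice, learned from samples)
L = rng.standard_normal((n, n))
Sigma = L @ L.T + 0.1 * np.eye(n)         # covariance of x, symmetric positive definite
Sigma_eps = 0.01 * np.eye(m)              # covariance of the zero-mean noise eps

# Draw one ground truth x and one noisy measurement y = A x + eps.
x = mu + np.linalg.cholesky(Sigma) @ rng.standard_normal(n)
y = A @ x + rng.multivariate_normal(np.zeros(m), Sigma_eps)

# MSE-optimal affine (generalized Tikhonov) reconstruction:
#   x_hat = mu + Sigma A^T (A Sigma A^T + Sigma_eps)^{-1} (y - A mu).
# Note that mu and Sigma, the "learned regularizer", are independent of A;
# A enters only when the estimator is assembled.
K = Sigma @ A.T @ np.linalg.inv(A @ Sigma @ A.T + Sigma_eps)
x_hat = mu + K @ (y - A @ mu)

print("reconstruction error:", np.linalg.norm(x_hat - x))
```

In the supervised setting of the paper, μ and Σ would be replaced by empirical estimates from training samples; the unsupervised setting needs samples of x alone, again because A never enters the regularizer.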
Has companion code repository: https://github.com/LearnTikhonov/Code