Necessary conditions for variational regularization schemes (Q2861889)
From MaRDI portal
scientific article; zbMATH DE number 6225042
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Necessary conditions for variational regularization schemes | scientific article; zbMATH DE number 6225042 | |
Statements
Necessary conditions for variational regularization schemes (English)
11 November 2013
ill-posed problem
inverse problem
variational regularization
Ivanov regularization
Tikhonov regularization
Morozov regularization
a priori parameter choice
discrepancy functional
topological space
Bregman distance
Kullback-Leibler divergence
stability
convergence
The authors study variational schemes for ill-posed problems in a general setting, with a focus on necessary conditions for the Tikhonov method to be a regularization scheme. We present some details; for this purpose, consider a continuous mapping \( F: X \to Y \), where \( (X,\tau_X) \) and \( (Y,\tau_Y) \) are topological spaces. In addition, let \( \rho: Y \times Y \to [0,\infty] \) be a discrepancy functional, which in general need not be a metric, and let \( R: X \to [0,\infty] \) be a stabilizing functional.

The paper starts with a comparison of three variational methods for the regularization of an ill-posed problem \( F(x) = y^{\mathrm{exact}} \): Tikhonov regularization \( T_{\alpha,y}(x) = \rho(F(x),y) + \alpha R(x) \to \min \), Ivanov regularization \( \rho(F(x),y) \to \min \) subject to \( R(x) \leq \tau \), and Morozov regularization \( R(x) \to \min \) subject to \( \rho(F(x),y) \leq \delta \). Here, minimization is always taken with respect to \( x \in X \); the parameters satisfy \( \alpha > 0 \) and \( \tau > 0 \), and \( \delta > 0 \) denotes the noise level for the right-hand side of \( F(x) = y^{\mathrm{exact}} \).

The authors then present a definition of what it means for the Tikhonov functional \( T_{\alpha,y} \) to be a regularization scheme. This definition comprises existence, stability, and convergence conditions. The convergence part makes use of a sequential convergence structure \( \mathcal{S} \) on \( Y \), which by definition maps any element \( y \in Y \) to the set of sequences in \( Y \) converging to \( y \). In a theorem, conditions are given which guarantee that the Tikhonov functional \( T_{\alpha,y} \) is a regularization scheme in the sense indicated above. One of these conditions states that convergence \( y_n \to y \) with respect to \( \mathcal{S} \) is equivalent to \( \rho(y_n,y) \to 0 \); a continuity condition is also considered.

Subsequently, the relations between the topological structure, the sequential convergence structure \( \mathcal{S} \), and the convergence and continuity conditions from the theorem above are investigated. Finally, the results are applied to discrepancy functionals \( \rho \) generated by Bregman distances and by the Kullback-Leibler divergence, respectively.
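For orientation, the two non-metric discrepancy functionals mentioned at the end are commonly defined as follows (standard definitions, stated here only as a reference point; the paper's precise conventions, e.g. the choice of subgradient and the underlying measure space, may differ): the Bregman distance induced by a proper convex functional \( J \) with subgradient \( \xi \in \partial J(\tilde y) \),
\[ D_J^{\xi}(y,\tilde y) = J(y) - J(\tilde y) - \langle \xi, y - \tilde y \rangle , \]
and the generalized Kullback-Leibler divergence for nonnegative densities,
\[ \mathrm{KL}(y,\tilde y) = \int \left( y \log\frac{y}{\tilde y} - y + \tilde y \right) \mathrm{d}\mu . \]
Both may take the value \( +\infty \) and neither is a metric in general, which illustrates why \( \rho \) above is only assumed to map into \( [0,\infty] \).

As a purely illustrative complement, the following minimal Python sketch shows the classical quadratic special case of the Tikhonov scheme (linear \( F \), \( \rho \) the squared Euclidean distance, \( R \) the squared norm, and an a priori parameter choice \( \alpha = \alpha(\delta) \)). This is one concrete instance of the general framework, not the paper's abstract topological setting; the kernel matrix and parameter choice below are hypothetical examples.

```python
import numpy as np

# Quadratic special case of the Tikhonov scheme T_{alpha,y}(x) = rho(F(x), y) + alpha * R(x):
# F(x) = A x, rho(u, v) = ||u - v||^2, R(x) = ||x||^2. The minimizer then solves
# the normal equations (A^T A + alpha * I) x = A^T y.
def tikhonov_minimizer(A, y, alpha):
    """Return argmin_x ||A x - y||^2 + alpha * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

# Hypothetical toy problem: a discretized Gaussian smoothing kernel, which is
# severely ill-conditioned, so naive inversion amplifies the data noise.
n = 100
t = np.linspace(0.0, 1.0, n)
A = np.exp(-100.0 * (t[:, None] - t[None, :]) ** 2) / n
x_exact = np.sin(2.0 * np.pi * t)
delta = 1e-3                                   # noise level of the right-hand side
rng = np.random.default_rng(0)
y_noisy = A @ x_exact + delta * rng.standard_normal(n)

# A priori parameter choice alpha = alpha(delta); here simply alpha proportional to delta.
x_alpha = tikhonov_minimizer(A, y_noisy, alpha=delta)
print("relative reconstruction error:",
      np.linalg.norm(x_alpha - x_exact) / np.linalg.norm(x_exact))
```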