Inverse problems. Tikhonov theory and algorithms (Q2874602)

scientific article; zbMATH DE number 6327847

    Statements

    8 August 2014
    nonlinear operator
    elliptic inverse problems
    tomography
    Tikhonov regularization
    source condition
    nonlinearity condition
    a priori parameter choice
    a posteriori parameter choice
    Banach space
    nonsmooth optimization
    sparsity optimization
    direct inversion methods
    Bayesian inference
    monograph
    convergence
    Bayesian inversion
    algorithm
    augmented Lagrangian method
    semi-smooth Newton method
    Inverse problems. Tikhonov theory and algorithms (English)
    The primary goal of this monograph is to blend up-to-date mathematical theory with state-of-the-art numerical algorithms for Tikhonov regularization. The main focus lies on nonsmooth regularization methods and their convergence analysis, parameter choice rules, convergence estimates for nonlinear problems, direct inversion methods, and Bayesian inversion. The presentation covers two components of applied inverse theory: mathematical theory (linear and nonlinear Tikhonov regularization) and numerical algorithms, including nonsmooth optimization techniques. The nonsmoothness of the emerging models poses significant challenges to their efficient and accurate numerical solution. The authors describe a number of efficient algorithms for the relevant nonsmooth optimization problems, e.g., the augmented Lagrangian method and the semi-smooth Newton method. Sparsity regularization is treated in great detail.

    The book consists of an introduction (Chapter 1) and six further chapters.

    Chapter 2 ``Models in inverse problems'' describes several mathematical models for linear and nonlinear inverse problems arising in diverse practical applications. The examples, mostly related to inverse medium problems, focus on practical problems modeled by partial differential equations (PDEs), where the input data consist of indirect measurements of the PDE solutions. These models are used throughout the book to illustrate the underlying ideas of the theory and algorithms.

    In Chapter 3 ``Tikhonov theory for linear problems'' the authors present the Tikhonov regularization technique for linear inverse problems \(Ku=g^{\dag}\), where \(K:X \to Y\) is a bounded linear operator acting between Banach spaces \(X\) and \(Y\). The accuracy of the noisy data \(g^{\delta}\) with respect to the exact data \(g^{\dag}=Ku^{\dag}\), with the true solution \(u^{\dag}\), is quantified in an error metric \(\phi\); mostly the authors are concerned with \(\phi(u,g^{\delta})=\|Ku-g^{\delta}\|^p\), \(p>0\). In the Tikhonov method, the parametrized family of optimization problems \(\min_{u \in C} \{ J_{\alpha}(u)=\phi(u,g^{\delta})+\alpha \psi(u) \}\) \((\alpha>0)\) is solved. Here, \(C \subset X\) is a closed and convex set of a priori constraints on the solution \(u^{\dag}\), and \(\psi(u)\) is the regularization term.
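    For orientation only (this numerical sketch is not taken from the book): in the quadratic special case \(p=2\), \(\psi(u)=\|u\|^2\), \(C=X\) of the above formulation, the minimizer can be computed in closed form from the normal equations \((K^{*}K+\alpha I)u_{\alpha}=K^{*}g^{\delta}\). The finite-dimensional operator, the noise level, and the a priori choice \(\alpha \sim \delta\) below are assumptions made purely for illustration.

```python
# Minimal illustrative sketch (not from the book): classical quadratic Tikhonov
# regularization for a linear problem K u = g with
#   J_alpha(u) = ||K u - g_delta||^2 + alpha * ||u||^2.
# The operator K, the noise level and the a priori rule alpha ~ delta are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n = 50
# A mildly ill-conditioned "forward operator": discrete integration (lower triangular).
K = np.tril(np.ones((n, n))) / n
u_true = np.sin(np.linspace(0, np.pi, n))           # true solution u^+
g_exact = K @ u_true                                # exact data g^+
delta = 1e-2
g_delta = g_exact + delta * rng.standard_normal(n)  # noisy data g^delta

# A priori parameter choice: alpha proportional to the noise level delta.
alpha = delta

# Minimizer of J_alpha via the normal equations (K^T K + alpha I) u = K^T g_delta.
u_alpha = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ g_delta)

print("relative reconstruction error:",
      np.linalg.norm(u_alpha - u_true) / np.linalg.norm(u_true))
```

    In this quadratic Hilbert-space setting the minimizer is unique and depends continuously on the data; the nonsmooth penalties emphasized in the book require iterative algorithms instead.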
    Chapter 4 ``Tikhonov theory for nonlinear inverse problems'' is devoted to the application of the Tikhonov regularization technique to operator equations \(K(u)=g^{\dag}\), where the nonlinear operator \(K:X \to Y\) is Fréchet (Gâteaux) differentiable and the accuracy of the noisy data \(g^{\delta}\) is measured by the noise level \(\delta=\|g^{\dag}-g^{\delta}\|\). For the Tikhonov functional \(J_{\alpha}(u)=\|K(u)-g^{\delta}\|^p+\alpha \psi(u)\), the authors present classical and recent results on the existence and stability of minimizers and on convergence rates under different source and nonlinearity conditions.

    Motivated by the fact that Tikhonov functionals arising in applications are often nondifferentiable, in Chapter 5 ``Nonsmooth optimization'' the authors discuss optimization theory and methods for a general class of optimization problems \(\min_{x\in C} J(x)=F(x)+\alpha \psi(\Lambda x)\), subject to the operator equality constraint \(E(x)=0\).

    Chapter 6 ``Direct inversion methods'' is devoted to techniques for extracting essential characteristics of inverse solutions, e.g., the location and size of inclusions. The obtained approximations can serve as initial guesses for Tikhonov regularization methods.

    Chapter 7 ``Bayesian inference'' deals with the question ``How plausible is the Tikhonov minimizer?''. The authors develop an effective use of Bayesian inference in the context of inverse problems, i.e., the incorporation of model uncertainties, measurement noise and approximation errors into the posterior distribution, the evaluation of the effectiveness of the prior information, and the selection of regularization parameters and proper mathematical models.

    The volume can be read by anyone with a basic knowledge of functional analysis and probability theory. A large number of examples, in which abstract theorems and algorithms are specialized to applied problems, makes the book suitable for graduate courses on applied mathematics and inverse problems.
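    As an illustration of the nonsmooth setting of Chapter 5, the following sketch (again not taken from the book) treats the sparsity-regularized problem \(\min_x \tfrac12\|Kx-g^{\delta}\|^2+\alpha\|x\|_1\) with the standard iterative soft-thresholding (ISTA) method. ISTA is only one of many possible algorithms (the book emphasizes, e.g., the augmented Lagrangian and semi-smooth Newton methods), and the operator, the value of \(\alpha\), the step size and the iteration count are illustrative assumptions.

```python
# Illustrative sketch only: sparsity regularization
#   min_x  0.5 * ||K x - g_delta||^2 + alpha * ||x||_1
# solved by iterative soft-thresholding (ISTA). This is a standard proximal
# method, not necessarily one advocated in the book; K, alpha, the step size
# and the iteration count are assumptions made for the example.
import numpy as np

def soft_threshold(v, tau):
    """Proximal map of tau * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(K, g_delta, alpha, n_iter=500):
    # Step size 1/L with L >= ||K^T K|| guarantees descent of the objective.
    L = np.linalg.norm(K, 2) ** 2
    x = np.zeros(K.shape[1])
    for _ in range(n_iter):
        grad = K.T @ (K @ x - g_delta)            # gradient of the smooth part
        x = soft_threshold(x - grad / L, alpha / L)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    K = rng.standard_normal((40, 100))            # underdetermined forward map
    x_true = np.zeros(100)
    x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]        # sparse ground truth
    g_delta = K @ x_true + 0.01 * rng.standard_normal(40)
    x_rec = ista(K, g_delta, alpha=0.1)
    print("recovered support:", np.nonzero(np.abs(x_rec) > 0.1)[0])
```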
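    A standard observation that underlies the Bayesian perspective of Chapter 7 (recalled here for orientation, not quoted from the review): under the additive Gaussian noise model \(g^{\delta}=Ku+\eta\) with \(\eta \sim N(0,\sigma^{2}I)\) and a Gaussian prior \(u \sim N(0,\gamma^{2}I)\), Bayes' formula yields the posterior \(p(u \mid g^{\delta}) \propto \exp\bigl(-\tfrac{1}{2\sigma^{2}}\|Ku-g^{\delta}\|^{2}-\tfrac{1}{2\gamma^{2}}\|u\|^{2}\bigr)\), so the maximum a posteriori estimate coincides with the Tikhonov minimizer for the quadratic penalty and the parameter \(\alpha=\sigma^{2}/\gamma^{2}\). This gives one precise reading of the question ``How plausible is the Tikhonov minimizer?''.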
