Thresholding gradient methods in Hilbert spaces: support identification and linear convergence
DOI: 10.1051/cocv/2019011
OpenAlex: W2963991544
MaRDI QID: Q5109200
Silvia Villa, Lorenzo Rosasco, Guillaume Garrigos
Publication date: 11 May 2020
Published in: ESAIM: Control, Optimisation and Calculus of Variations
Full work available at URL: https://arxiv.org/abs/1712.00357
Mathematics Subject Classification:
- Convex programming (90C25)
- Optimality conditions and duality in mathematical programming (90C46)
- Numerical solutions to equations with linear operators (65J10)
- Numerical solutions to equations with nonlinear operators (65J15)
- Numerical solutions of ill-posed problems in abstract spaces; regularization (65J20)
- Numerical solution to inverse problems in abstract spaces (65J22)
- Numerical methods for variational inequalities and related problems (65K15)
Related Items (6)
Cites Work
- A necessary and sufficient condition for exact sparse recovery by \(\ell_1\) minimization
- Linear convergence of iterative soft-thresholding
- On the convergence of the proximal algorithm for nonsmooth functions involving analytic features
- Elastic-net regularization in learning theory
- Partially finite convex programming. I: Quasi relative interiors and duality theory
- A family of functional inequalities: Łojasiewicz inequalities and displacement convex functions
- From error bounds to the complexity of first-order descent methods for convex functions
- Maximal solutions of sparse analysis regularization
- Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods
- Global error bounds for piecewise convex polynomials
- "Active-set complexity" of proximal gradient: how long does it take to find the sparsity pattern?
- Splitting methods with variable metric for Kurdyka-Łojasiewicz functions and general convergence rates
- Linear convergence of first order methods for non-strongly convex optimization
- Convergence of the forward-backward algorithm: beyond the worst-case with the help of geometry
- Nonparametric sparsity and regularization
- Convex Optimization in Normed Spaces
- Activity Identification and Local Linear Convergence of Forward--Backward-type Methods
- Evolution equations for maximal monotone operators: asymptotic analysis in continuous and discrete time
- Fixed-Point Continuation for $\ell_1$-Minimization: Methodology and Convergence
- Proximal Thresholding Algorithm for Minimization over Orthonormal Bases
- Sensitivity Analysis for Mirror-Stratifiable Convex Functions
- Regularized learning schemes in feature Banach spaces
- An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
- Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods
- Regularization and Variable Selection Via the Elastic Net
- Convergence Rate Analysis of Several Splitting Schemes
- Sparse spikes super-resolution on thin grids II: the continuous basis pursuit
- Convex Analysis
- Convex analysis and monotone operator theory in Hilbert spaces