Analysis and Algorithms for Some Compressed Sensing Models Based on L1/L2 Minimization
Publication:4997175
DOI: 10.1137/20M1355380 · zbMath: 1470.90098 · arXiv: 2007.12821 · MaRDI QID: Q4997175
Liaoyuan Zeng, Ting Kei Pong, Peiran Yu
Publication date: 28 June 2021
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/2007.12821
Mathematics Subject Classification:
- Applications of mathematical programming (90C90)
- Nonconvex programming, global optimization (90C26)
- Fractional programming (90C32)
- Methods of successive quadratic programming type (90C55)
Related Items (6)
- Minimization of $L_1$ Over $L_2$ for Sparse Signal Recovery with Convergence Guarantee
- Study on \(L_1\) over \(L_2\) Minimization for nonnegative signal recovery
- Sorted \(L_1/L_2\) minimization for sparse signal recovery
- Retraction-based first-order feasible methods for difference-of-convex programs with smooth inequality and simple geometric constraints
- Analysis of the ratio of \(\ell_1\) and \(\ell_2\) norms in compressed sensing
- Extrapolated Proximal Subgradient Algorithms for Nonconvex and Nonsmooth Fractional Programs
Uses Software
Cites Work
- A dual method for minimizing a nonsmooth objective over one smooth inequality constraint
- Theory of compressive sensing via \(\ell_1\)-minimization: a non-RIP analysis and extensions
- Proximal alternating linearized minimization for nonconvex and nonsmooth problems
- Computing sparse representation in a highly coherent dictionary based on difference of \(L_1\) and \(L_2\)
- Convex analysis and nonlinear optimization. Theory and examples.
- On the convergence of the proximal algorithm for nonsmooth functions involving analytic features
- Error bounds for systems of lower semicontinuous functions in Asplund spaces
- Revisiting Dinkelbach-type algorithms for generalized fractional programs
- A proximal difference-of-convex algorithm with extrapolation
- Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods
- Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods
- The multiproximal linearization method for convex composite problems
- Ratio and difference of \(l_1\) and \(l_2\) norms and sparse representation with coherent dictionaries
- A refined convergence analysis of \(\mathrm{pDCA}_{e}\) with applications to simultaneous sparse recovery and outlier detection
- Techniques of variational analysis
- Atomic Decomposition by Basis Pursuit
- Majorization-Minimization Procedures and Convergence of SQP Methods for Semi-Algebraic and Tame Programs
- A Moving Balls Approximation Method for a Class of Smooth Constrained Minimization Problems
- Proximal Alternating Minimization and Projection Methods for Nonconvex Problems: An Approach Based on the Kurdyka-Łojasiewicz Inequality
- Decoding by Linear Programming
- Probing the Pareto Frontier for Basis Pursuit Solutions
- Variational Analysis
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Sparse Approximate Solutions to Linear Systems
- Convergence Rate Analysis of a Sequential Convex Programming Method with Line Search for a Class of Constrained Difference-of-Convex Optimization Problems
- Accelerated Schemes for the $L_1/L_2$ Minimization
- A Scale-Invariant Approach for Sparse Signal Recovery
- Proximal-gradient algorithms for fractional programming
- The Łojasiewicz Inequality for Nonsmooth Subanalytic Functions with Applications to Subgradient Dynamical Systems
- Stable signal recovery from incomplete and inaccurate measurements
- On Nonlinear Fractional Programming
- Convex Analysis
- Penalty Methods for a Class of Non-Lipschitz Optimization Problems
- Limited-Angle CT Reconstruction via the $L_1/L_2$ Minimization
- Convex analysis and global optimization