Weighted \(l_p - l_1\) minimization methods for block sparse recovery and rank minimization
From MaRDI portal
Publication:5856318
DOI: 10.1142/S0219530520500086 · zbMath: 1461.90092 · OpenAlex: W3036569189 · MaRDI QID: Q5856318
Publication date: 25 March 2021
Published in: Analysis and Applications
Full work available at URL: https://doi.org/10.1142/s0219530520500086
Classification: Convex programming (90C25) · Signal theory (characterization, reconstruction, filtering, etc.) (94A12) · Matrix completion problems (15A83)
Uses Software
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Block sparse recovery via mixed \(l_2/l_1\) minimization
- Convergence of fixed-point continuation algorithms for matrix rank minimization
- Fixed point and Bregman iterative methods for matrix rank minimization
- Improved stability conditions of BOGA for noisy block-sparse signals
- Iterative hard thresholding for compressed sensing
- The restricted isometry property and its implications for compressed sensing
- CoSaMP: Iterative signal recovery from incomplete and inaccurate samples
- Convergence of projected Landweber iteration for matrix rank minimization
- Sharp sufficient conditions for stable recovery of block sparse signals by block orthogonal matching pursuit
- Convergence and stability of iteratively reweighted least squares for low-rank matrix recovery
- A new bound on the block restricted isometry constant in compressed sensing
- Convergence analysis of projected gradient descent for Schatten-\(p\) nonconvex matrix recovery
- Improved Iteratively Reweighted Least Squares for Unconstrained Smoothed $\ell_q$ Minimization
- A Singular Value Thresholding Algorithm for Matrix Completion
- Truncated $l_{1-2}$ Models for Sparse Recovery and Rank Minimization
- Low-rank Matrix Recovery via Iteratively Reweighted Least Squares Minimization
- Restricted isometry properties and nonconvex compressive sensing
- Decoding by Linear Programming
- Interior-Point Method for Nuclear Norm Approximation with Application to System Identification
- Reduce and Boost: Recovering Arbitrary Sets of Jointly Sparse Vectors
- High-Resolution Radar via Compressed Sensing
- Block-Sparse Signals: Uncertainty Relations and Efficient Recovery
- Blind Multiband Signal Reconstruction: Compressed Sensing for Analog Signals
- Adaptive Compressed Sensing Radar Oriented Toward Cognitive Detection in Dynamic Sparse Target Scene
- Iterative Reweighted \(\ell_2/\ell_1\) Recovery Algorithms for Compressed Sensing of Block Sparse Signals
- Robust Recovery of Signals From a Structured Union of Subspaces
- The High Order Block RIP Condition for Signal Recovery
- Minimization of $\ell_{1-2}$ for Compressed Sensing
- Tight Oracle Inequalities for Low-Rank Matrix Recovery From a Minimal Number of Noisy Random Measurements
- The Power of Convex Relaxation: Near-Optimal Matrix Completion
- Restricted $p$-Isometry Properties of Nonconvex Matrix Recovery