Consistency bounds and support recovery of d-stationary solutions of sparse sample average approximations
Publication: 2022171
DOI: 10.1007/s10898-019-00857-z
zbMath: 1465.90067
OpenAlex: W2983611585
Wikidata: Q126801873
Scholia: Q126801873
MaRDI QID: Q2022171
Publication date: 28 April 2021
Published in: Journal of Global Optimization
Full work available at URL: https://doi.org/10.1007/s10898-019-00857-z
Related Items (1)
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- The Adaptive Lasso and Its Oracle Properties
- DC approximation approaches for sparse optimization
- Point source super-resolution via non-convex \(L_1\) based methods
- Statistics for high-dimensional data. Methods, theory and applications.
- Support recovery without incoherence: a case for nonconvex regularization
- Enhancing sparsity by reweighted \(\ell _{1}\) minimization
- Minimization of transformed \(L_1\) penalty: theory, difference of convex function algorithm, and robust application in compressed sensing
- The DC (Difference of convex functions) programming and DCA revisited with DC models of real world nonconvex optimization problems
- Asymptotics for Lasso-type estimators.
- Simultaneous analysis of Lasso and Dantzig selector
- Structural properties of affine sparsity constraints
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Computing B-Stationary Points of Nonsmooth DC Programs
- Decoding by Linear Programming
- Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
- Lectures on Stochastic Programming
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Local Strong Homogeneity of a Regularized Estimator
- Decomposition Methods for Computing Directional Stationary Solutions of a Class of Nonsmooth Nonconvex Optimization Problems
- Minimization of \(\ell_{1-2}\) for Compressed Sensing
- Nonconcave Penalized Likelihood With NP-Dimensionality
- Difference-of-Convex Learning: Directional Stationarity, Optimality, and Sparsity
- Confidence Intervals and Regions for the Lasso by Using Stochastic Variational Inequality Techniques in Optimization
- Bregman Iterative Algorithms for \(\ell_1\)-Minimization with Applications to Compressed Sensing
- Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers