scientific article; zbMATH DE number 6982301
zbMATH: 1444.62091
arXiv: 1701.05128
MaRDI QID: Q4558147
Xiliang Lu, Jian Huang, Yan Yan Liu, Yu Ling Jiao
Publication date: 21 November 2018
Full work available at URL: https://arxiv.org/abs/1701.05128
Title: A constructive approach to \(L_0\) penalized regression
Keywords: oracle property; geometrical convergence; root finding; support detection; Karush-Kuhn-Tucker (KKT) conditions; nonasymptotic error bounds
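The keywords "support detection" and "root finding" point to an SDAR-type iteration for \(\ell_0\)-penalized least squares: alternately detect an active set from the KKT conditions and solve the restricted least-squares problem exactly on it. The following is a minimal illustrative sketch only, not the article's exact specification; the function name `sdar_sketch`, the target-sparsity parameter `T`, and the stopping rule are assumptions made for this sketch.

```python
import numpy as np

def sdar_sketch(X, y, T, max_iter=50):
    """Sketch of a support-detection-and-root-finding iteration for
    l0-penalized least squares (illustrative assumptions throughout)."""
    n, p = X.shape
    beta = np.zeros(p)                      # primal iterate
    d = X.T @ (y - X @ beta) / n            # dual iterate (scaled correlations)
    support = np.array([], dtype=int)
    for _ in range(max_iter):
        # Support detection: by the KKT conditions of the l0 problem,
        # the active set is given by the T largest entries of |beta + d|.
        new_support = np.sort(np.argsort(np.abs(beta + d))[-T:])
        if np.array_equal(new_support, support):
            break                           # active set has stabilized
        support = new_support
        # Root finding: solve the least-squares problem exactly on the
        # detected support; all other coefficients are set to zero.
        beta = np.zeros(p)
        sol, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        beta[support] = sol
        d = X.T @ (y - X @ beta) / n
        d[support] = 0.0                    # dual vanishes on the active set
    return beta
```

Because the active set has fixed size and the restricted subproblem is solved exactly at each step, iterations of this kind can stabilize after finitely many steps; the "geometrical convergence", "oracle property", and "nonasymptotic error bounds" keywords above refer to guarantees for such schemes.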
Related Items
- Fitting sparse linear models under the sufficient and necessary condition for model identification
- \(\ell_0\)-regularized high-dimensional accelerated failure time model
- GSDAR: a fast Newton algorithm for \(\ell_0\) regularized generalized linear models with statistical guarantee
- L0-Regularized Learning for High-Dimensional Additive Hazards Regression
- A data-driven line search rule for support recovery in high-dimensional data analysis
- The springback penalty for robust signal recovery
- Subset selection in network-linked data
- Newton method for \(\ell_0\)-regularized optimization
- Zero-norm regularized problems: equivalent surrogates, proximal MM method and statistical error bound
- A communication-efficient method for \(\ell_0\) regularization linear regression models
- Sparse signal reconstruction via the approximations of \(\ell_0\) quasinorm
- \(L_0\)-regularization for high-dimensional regression with corrupted data
- Communication-efficient estimation for distributed subset selection
- Sparse quantile regression
- Sparse regularization with the \(\ell_0\) norm
- Best subset selection with shrinkage: sparse additive hazards regression with the grouping effect
- Sparse HP filter: finding kinks in the COVID-19 contact rate
- Truncated \(L^1\) Regularized Linear Regression: Theory and Algorithm
- An extended Newton-type algorithm for \(\ell_2\)-regularized sparse logistic regression and its efficiency for classifying large-scale datasets
- A unified primal dual active set algorithm for nonconvex sparse recovery
- Smoothing Newton method for \(\ell^0\)-\(\ell^2\) regularized linear inverse problem
- High-performance statistical computing in the computing environments of the 2020s
- High-dimensional linear regression with hard thresholding regularization: theory and algorithm
Cites Work
- Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection
- Nearly unbiased variable selection under minimax concave penalty
- The Adaptive Lasso and Its Oracle Properties
- Best subset selection via a modern optimization lens
- Gradient methods for minimizing composite functions
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- Iterative hard thresholding for compressed sensing
- Iterative thresholding for sparse approximations
- Fast global convergence of gradient methods for high-dimensional statistical recovery
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- A primal dual active set with continuation algorithm for the \(\ell^0\)-regularized optimization problem
- Rejoinder: One-step sparse estimates in nonconcave penalized likelihood models
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Nonconcave penalized likelihood with a diverging number of parameters.
- Least angle regression. (With discussion)
- Thresholding-based iterative selection procedures for model selection and shrinkage
- Asymptotic properties of bridge estimators in sparse high-dimensional regression models
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
- Complexity of unconstrained \(L_2 - L_p\) minimization
- Calibrating nonconvex penalized regression in ultra-high dimension
- Pathwise coordinate optimization
- Coordinate descent algorithms for lasso penalized regression
- High-dimensional graphs and variable selection with the Lasso
- Strong oracle optimality of folded concave penalized estimation
- Variable selection using MM algorithms
- Atomic Decomposition by Basis Pursuit
- A Proximal-Gradient Homotopy Method for the Sparse Least-Squares Problem
- SparseNet: Coordinate Descent With Nonconvex Penalties
- Decoding by Linear Programming
- Stable recovery of sparse overcomplete representations in the presence of noise
- Greed is Good: Algorithmic Results for Sparse Approximation
- Fast Solution of \(\ell_1\)-Norm Minimization Problems When the Solution May Be Sparse
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- A new approach to variable selection in least squares problems
- Recovering Sparse Signals With a Certain Family of Nonconvex Penalties and DC Programming
- Sure Independence Screening for Ultrahigh Dimensional Feature Space
- Sparse Approximate Solutions to Linear Systems
- Matching pursuits with time-frequency dictionaries
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using \(\ell_1\)-Constrained Quadratic Programming (Lasso)
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over \(\ell_q\)-Balls
- Sparse Recovery With Orthogonal Matching Pursuit Under RIP
- Orthogonal Matching Pursuit for Sparse Signal Recovery With Noise
- Adaptive Forward-Backward Greedy Algorithm for Learning Sparse Representations
- The Computational Complexity of the Restricted Isometry Property, the Nullspace Property, and Related Concepts in Compressed Sensing
- Smoothly Clipped Absolute Deviation on High Dimensions
- Stable signal recovery from incomplete and inaccurate measurements
- Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima
- A general theory of concave regularization for high-dimensional sparse estimation problems