Sparse Learning for Large-Scale and High-Dimensional Data: A Randomized Convex-Concave Optimization Approach
From MaRDI portal
Publication: 2830269
DOI: 10.1007/978-3-319-46379-7_6
zbMath: 1478.68305
arXiv: 1511.03766
OpenAlex: W2249065054
MaRDI QID: Q2830269
Zhi-Hua Zhou, Tianbao Yang, Rong Jin, Li-jun Zhang
Publication date: 9 November 2016
Published in: Lecture Notes in Computer Science
Full work available at URL: https://arxiv.org/abs/1511.03766
Classification:
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Nonconvex programming, global optimization (90C26)
- Learning and adaptive systems in artificial intelligence (68T05)
- Randomized algorithms (68W20)
Cites Work
- Smooth minimization of non-smooth functions
- Oracle inequalities in empirical risk minimization and sparse recovery problems. École d'Été de Probabilités de Saint-Flour XXXVIII-2008.
- User-friendly tail bounds for sums of random matrices
- Fast global convergence of gradient methods for high-dimensional statistical recovery
- Kernels as features: on kernels, margins, and low-dimensional mappings
- The restricted isometry property and its implications for compressed sensing
- Uniform uncertainty principle for Bernoulli and subgaussian ensembles
- Database-friendly random projections: Johnson-Lindenstrauss with binary coins.
- A first-order primal-dual algorithm for convex problems with applications to imaging
- An algorithmic theory of learning: Robust concepts and random projection
- One-Bit Compressed Sensing by Linear Programming
- High-dimensional subset recovery in noise: Sparsified measurements without loss of statistical efficiency
- Optimization with Sparsity-Inducing Penalties
- Random Projections for Classification: A Recovery Approach
- Robust 1-bit Compressed Sensing and Sparse Logistic Regression: A Convex Programming Approach
- Randomized Algorithms for Matrices and Data
- An Accelerated HPE-Type Algorithm for a Class of Composite Convex-Concave Saddle-Point Problems
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
- An elementary proof of a theorem of Johnson and Lindenstrauss
- Regularization and Variable Selection Via the Elastic Net
- Prediction, Learning, and Games