Learning sparse gradients for variable selection and dimension reduction
From MaRDI portal
Publication:439003
DOI: 10.1007/s10994-012-5284-9
zbMath: 1243.68258
OpenAlex: W2070991548
MaRDI QID: Q439003
Publication date: 31 July 2012
Published in: Machine Learning
Full work available at URL: https://doi.org/10.1007/s10994-012-5284-9
MSC classifications: General nonlinear regression (62J02); Learning and adaptive systems in artificial intelligence (68T05)
Related Items
- Structure learning via unstructured kernel-based M-estimation
- High-dimensional local linear regression under sparsity and convex losses
- Learning gradients from nonidentical data
- Refined generalization bounds of gradient learning over reproducing kernel Hilbert spaces
- Sparse dimension reduction for survival data
- Performance analysis of the LapRSSLG algorithm in learning theory
- Discovering model structure for partially linear models
Cites Work
- Learning gradients on manifolds
- Component selection and smoothing in multivariate nonparametric regression
- Learning and approximation by Gaussians on Riemannian manifolds
- Some properties of invariant sets of a flow
- Multivariate locally weighted least squares regression
- Structure adaptive approach for dimension reduction.
- Least angle regression. (With discussion)
- Statistical behavior and consistency of classification methods based on convex risk minimization.
- Weak convergence and empirical processes. With applications to statistics
- Selection of variables and dimension reduction in high-dimensional non-parametric regression
- Feature space perspectives for learning the kernel
- Kernel dimension reduction in regression
- A framelet-based image inpainting algorithm
- Rodeo: Sparse, greedy nonparametric regression
- Contour regression: a general approach to dimension reduction
- Exploring Regression Structure Using Nonparametric Functional Estimation
- Learning Theory
- Sliced Inverse Regression for Dimension Reduction
- On Principal Hessian Directions for Data Visualization and Dimension Reduction: Another Application of Stein's Lemma
- Theory & Methods: Special Invited Paper: Dimension Reduction and Visualization in Discriminant Analysis (with discussion)
- An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
- An Adaptive Estimation of Dimension Reduction Space
- Laplacian Eigenmaps for Dimensionality Reduction and Data Representation
- DOI: 10.1162/153244303322753616
- DOI: 10.1162/153244303322753661
- DOI: 10.1162/153244303322753751
- De-noising by soft-thresholding
- Regularization and Variable Selection Via the Elastic Net
- Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data
- Signal Recovery by Proximal Forward-Backward Splitting
- On Learning Vector-Valued Functions
- Theory of Reproducing Kernels
- Compressed sensing
- Minimal realizations of nonlinear systems
- Gene selection for cancer classification using support vector machines