Editorial: Statistical learning methods including dimensionality reduction
DOI: 10.1016/j.csda.2007.04.012 · zbMath: 1452.00018 · OpenAlex: W2059667279 · MaRDI QID: Q1020825
No author found.
Publication date: 2 June 2009
Published in: Computational Statistics and Data Analysis
Full work available at URL: https://doi.org/10.1016/j.csda.2007.04.012
Computational methods for problems pertaining to statistics (62-08)
Ridge regression; shrinkage estimators (Lasso) (62J07)
Classification and discrimination; cluster analysis (statistical aspects) (62H30)
Proceedings, conferences, collections, etc. pertaining to statistics (62-06)
Collections of articles of miscellaneous specific interest (00B15)
Cites Work
- Projected gradient approach to the numerical solution of the SCoTLASS
- An even faster algorithm for ridge regression of reduced rank data
- A mixture model for the classification of three-way proximity data
- Multivariable regression model building by using fractional polynomials: description of SAS, STATA and R programs
- Unbiased variable selection for classification trees with multivariate responses
- Variable selection in kernel Fisher discriminant analysis by means of recursive feature elimination
- Parsimonious additive models
- DALASS: variable selection in discriminant analysis via the LASSO
- Kernel logistic PLS: a tool for supervised nonlinear dimensionality reduction and binary classification
- Boosting ridge regression
- Robust variable selection using least angle regression and elemental set sampling
- Relaxed Lasso
- Regularized linear and kernel redundancy analysis
- Input selection and shrinkage in multiresponse linear regression
- Improving implementation of linear discriminant analysis for the high dimension/small sample size problem
- Class prediction and gene selection for DNA microarrays using regularized sliced inverse regression
- High-dimensional pseudo-logistic regression and classification with applications to gene expression data
- Unbiased split selection for classification trees based on the Gini index
- High-dimensional data clustering
- Model-based methods to identify multiple cluster structures in a data set
- A unifying model involving a categorical and/or dimensional reduction for multimode data
- Dimension reduction via principal variables
- Non-symmetric correspondence analysis with ordinal variables using orthogonal polynomials
- A simple and efficient method for variable ranking according to their usefulness for learning
- Combined use of association rules mining and clustering methods to find relevant links between binary rare attributes in a large data set
- Model-Based Gaussian and Non-Gaussian Clustering