Parallel integrative learning for large-scale multi-response regression with incomplete outcomes
From MaRDI portal
Publication: 2242011
DOI: 10.1016/j.csda.2021.107243
OpenAlex: W3150901058
MaRDI QID: Q2242011
Daoji Li, Zemin Zheng, Ruipeng Dong
Publication date: 9 November 2021
Published in: Computational Statistics and Data Analysis
Full work available at URL: https://arxiv.org/abs/2104.05076
Cites Work
- Leveraging mixed and incomplete outcomes via reduced-rank modeling
- Nearly unbiased variable selection under minimax concave penalty
- A unified approach to model selection and sparse recovery using regularized least squares
- The Adaptive Lasso and Its Oracle Properties
- Optimal selection of reduced rank estimators of high-dimensional matrices
- Joint variable and rank selection for parsimonious estimation of high-dimensional matrices
- Reduced-rank regression for the multivariate linear model
- Estimation and hypothesis test for partial linear multiplicative models
- Least angle regression. (With discussion)
- Multivariate spatial autoregressive model for large scale social networks
- A note on rank reduction in sparse multivariate regression
- Generalized high-dimensional trace regression via nuclear norm regularization
- Simultaneous analysis of Lasso and Dantzig selector
- High-dimensional generalized linear models and the lasso
- Sparsity oracle inequalities for the Lasso
- Noisy low-rank matrix completion with general sampling distribution
- Pathwise coordinate optimization
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Coordinate descent algorithms for lasso penalized regression
- Estimation and inference in semiparametric quantile factor models
- Asymptotic Equivalence of Regularization Methods in Thresholded Parameter Space
- Non-asymptotic theory of random matrices: extreme singular values
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Sparse Reduced-Rank Regression for Simultaneous Dimension Reduction and Variable Selection
- Dimensionality Reduction and Variable Selection in Multivariate Varying-Coefficient Models With a Large Number of Covariates
- Nonsparse Learning with Latent Variables
- SOFAR: Large-Scale Association Network Learning
- A useful variant of the Davis–Kahan theorem for statisticians
- Reduced Rank Stochastic Regression with a Sparse Singular Value Decomposition
- Tuning Parameter Selection in High Dimensional Penalized Likelihood