Generalized high-dimensional trace regression via nuclear norm regularization
From MaRDI portal
Publication: Q2323374
DOI: 10.1016/j.jeconom.2019.04.026
zbMath: 1452.62536
arXiv: 1710.08083
OpenAlex: W2963869415
MaRDI QID: Q2323374
Ziwei Zhu, Wenyan Gong, Jianqing Fan
Publication date: 2 September 2019
Published in: Journal of Econometrics
Full work available at URL: https://arxiv.org/abs/1710.08083
Keywords: matrix completion, logistic regression, high-dimensional statistics, nuclear norm regularization, trace regression, restricted strong convexity
MSC classification:
- Applications of statistics to economics (62P20)
- Estimation in multivariate analysis (62H12)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Generalized linear models (logistic models) (62J12)
Related Items
- An optimal statistical and computational framework for generalized tensor estimation
- Inference for low-rank tensors -- no need to debias
- Maximum likelihood estimation and inference for high dimensional generalized factor models with application to factor-augmented regressions
- Matrix completion under complex survey sampling
- Oracle inequality for sparse trace regression models with exponential \(\beta\)-mixing errors
- The rate of convergence for sparse and low-rank quantile trace regression
- Robust Recommendation via Social Network Enhanced Matrix Completion
- Large-scale minimum variance portfolio allocation using double regularization
- LOCUS: a regularized blind source separation method with low-rank structure for investigating brain connectivity
- Profile GMM estimation of panel data models with interactive fixed effects
- High-dimensional VARs with common factors
- Expectile trace regression via low-rank and group sparsity regularization
- Generalized Factor Model for Ultra-High Dimensional Correlated Variables with Mixed Types
- Parallel integrative learning for large-scale multi-response regression with incomplete outcomes
- Double fused Lasso regularized regression with both matrix and vector valued predictors
- Quantile trace regression via nuclear norm regularization
- Bayesian Regression With Undirected Network Predictors With an Application to Brain Connectome Data
- High dimensional generalized linear models for temporal dependent data
- Factor Extraction in Dynamic Factor Models: Kalman Filter Versus Principal Components
- ISLET: Fast and Optimal Low-Rank Tensor Regression via Importance Sketching
Cites Work
- Reduced rank regression via adaptive nuclear norm penalization
- Nearly unbiased variable selection under minimax concave penalty
- Generalized reduced rank tests using the singular value decomposition
- Oracle inequalities for high dimensional vector autoregressions
- Hybrid generalized empirical likelihood estimators: instrument selection with adaptive lasso
- Estimation of (near) low-rank matrices with noise and high-dimensional scaling
- Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion
- LASSO estimation of threshold autoregressive models
- One-step sparse estimates in nonconcave penalized likelihood models
- Bayesian reduced rank regression in econometrics
- Reduced-rank regression for the multivariate linear model
- Multivariate reduced-rank regression
- Chernoff-type bound for finite Markov chains
- Estimation of partially nonstationary vector autoregressive models with seasonal behavior
- I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
- Forecasting stock market movement direction with support vector machine
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Instrumental variables estimation with many weak instruments using regularized JIVE
- Concentration inequalities for Markov chains by Marton couplings and spectral methods
- Atomic Decomposition by Basis Pursuit
- One-Bit Compressed Sensing by Linear Programming
- Sparse Models and Methods for Optimal Instruments With an Application to Eminent Domain
- A Max-Norm Constrained Minimization Approach to 1-Bit Matrix Completion
- Robust 1-bit Compressed Sensing and Sparse Logistic Regression: A Convex Programming Approach
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Forecasting Using Principal Components From a Large Number of Predictors
- Program Evaluation and Causal Inference With High-Dimensional Data
- Sure Independence Screening for Ultrahigh Dimensional Feature Space
- 1-Bit matrix completion
- Nonconcave Penalized Likelihood With NP-Dimensionality
- Restricted strong convexity and weighted matrix completion: Optimal bounds with noise
- Estimating Linear Restrictions on Regression Coefficients for Multivariate Normal Distributions
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers