On Low-rank Trace Regression under General Sampling Distribution
arXiv: 1904.08576
MaRDI QID: Q6317401
Author name not available
Publication date: 17 April 2019
Abstract: In this paper, we study trace regression when a matrix of parameters B* is estimated via the convex relaxation of a rank-regularized regression or via regularized non-convex optimization. These estimators are known to satisfy near-optimal error bounds under assumptions on the rank, coherence, and spikiness of B*. We start by introducing a general notion of spikiness for B* that yields a generic recipe for proving the restricted strong convexity of the sampling operator of the trace regression, and we obtain near-optimal, non-asymptotic bounds on the estimation error. As in the existing literature, these results require the regularization parameter to exceed a certain theory-inspired threshold that depends on the observation noise, which may be unknown in practice. Next, we extend the error bounds to the case where the regularization parameter is chosen via cross-validation. This result is significant in that existing theoretical results on cross-validated estimators (Kale et al., 2011; Kumar et al., 2013; Abou-Moustafa and Szepesvari, 2017) do not apply to our setting, since the estimators we study are not known to satisfy their required notion of stability. Finally, using simulations on synthetic and real data, we show that the cross-validated estimator selects a near-optimal penalty parameter and outperforms the theory-inspired approach to selecting the parameter.
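To make the estimator in the abstract concrete, the sketch below fits a nuclear-norm (trace-norm) penalized least-squares trace regression by proximal gradient descent with singular value thresholding, and picks the penalty level by K-fold cross-validation over held-out squared error. This is a minimal illustration under stated assumptions, not the authors' cv-impute code: the function names (svt, fit_trace_regression, cv_select_lambda), the proximal-gradient solver, the step-size bound, and the lambda grid are all assumptions chosen for clarity.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def fit_trace_regression(X, y, lam, step=None, n_iter=300):
    """Nuclear-norm-penalized least squares via proximal gradient descent.

    X : (n, d1, d2) array of measurement matrices X_i
    y : (n,) responses, modeled as y_i = <X_i, B*> + noise
    """
    n, d1, d2 = X.shape
    if step is None:
        # Safe step size from a crude upper bound on the Lipschitz
        # constant of the least-squares gradient: (1/n) sum_i ||X_i||_F^2.
        step = 1.0 / max(np.sum(X ** 2) / n, 1e-12)
    B = np.zeros((d1, d2))
    for _ in range(n_iter):
        resid = np.tensordot(X, B, axes=([1, 2], [0, 1])) - y  # <X_i, B> - y_i
        grad = np.tensordot(resid, X, axes=(0, 0)) / n          # (1/n) sum_i resid_i * X_i
        B = svt(B - step * grad, step * lam)                    # gradient step, then prox
    return B

def cv_select_lambda(X, y, lambdas, n_folds=5, seed=0):
    """K-fold cross-validation over a grid of penalty levels."""
    n = len(y)
    rng = np.random.default_rng(seed)
    folds = rng.permutation(n) % n_folds
    errs = np.zeros(len(lambdas))
    for k in range(n_folds):
        tr, te = folds != k, folds == k
        for j, lam in enumerate(lambdas):
            B = fit_trace_regression(X[tr], y[tr], lam)
            pred = np.tensordot(X[te], B, axes=([1, 2], [0, 1]))
            errs[j] += np.mean((pred - y[te]) ** 2)
    return lambdas[int(np.argmin(errs))]

if __name__ == "__main__":
    # Hypothetical demo: rank-2 target, Gaussian measurement matrices.
    rng = np.random.default_rng(1)
    n, d, r = 400, 10, 2
    Bstar = rng.standard_normal((d, r)) @ rng.standard_normal((r, d))
    X = rng.standard_normal((n, d, d))
    y = np.tensordot(X, Bstar, axes=([1, 2], [0, 1])) + 0.5 * rng.standard_normal(n)
    lam = cv_select_lambda(X, y, lambdas=np.geomspace(0.01, 1.0, 8))
    B_hat = fit_trace_regression(X, y, lam)
    print("selected lambda:", lam,
          "relative error:", np.linalg.norm(B_hat - Bstar) / np.linalg.norm(Bstar))
```

The cross-validated selection here mirrors the procedure analyzed in the paper: the penalty grid is scanned, and the level with the smallest held-out error replaces the theory-inspired threshold, which would require knowledge of the noise level.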
Companion code repository: https://github.com/mohsenbayati/cv-impute