Globally Convergent Newton Methods for Ill-conditioned Generalized Self-concordant Losses
Publication: 6321455
arXiv: 1907.01771 · MaRDI QID: Q6321455
Author name not available
Publication date: 3 July 2019
Abstract: In this paper, we study large-scale convex optimization algorithms based on the Newton method applied to regularized generalized self-concordant losses, which include logistic regression and softmax regression. We first prove that our new simple scheme, based on solving a sequence of problems with decreasing regularization parameters, is globally convergent, and that its convergence is linear with a constant factor that scales only logarithmically with the condition number. In the parametric setting, we obtain an algorithm with the same scaling as regular first-order methods but with improved behavior, in particular on ill-conditioned problems. Second, in the non-parametric machine learning setting, we provide an explicit algorithm combining the previous scheme with Nyström projection techniques, and prove that it achieves optimal generalization bounds with a time complexity of order O(n df), a memory complexity of order O(df²), and no dependence on the condition number, generalizing the results known for least-squares regression. Here n is the number of observations and df is the associated degrees of freedom. In particular, this is the first large-scale algorithm to solve logistic and softmax regressions in the non-parametric setting with large condition numbers and theoretical guarantees.
Has companion code repository: https://github.com/EigenPro/EigenPro
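The abstract describes a path-following Newton scheme: solve a sequence of regularized problems with decreasing regularization parameters, taking damped Newton steps at each level. Below is a minimal Python sketch of that idea for l2-regularized logistic regression. The function name, the geometric decrease factor q, the number of inner steps, and the damping rule are illustrative assumptions, not the paper's exact algorithm, and this is not code taken from the linked repository.

    import numpy as np

    def newton_decreasing_regularization(X, y, lam_target, lam0=1.0, q=0.5, inner_steps=5):
        """Illustrative sketch: l2-regularized logistic regression (labels y in {-1, +1})
        solved by following a path of geometrically decreasing regularization parameters,
        with a few damped Newton steps at each level."""
        n, d = X.shape
        w = np.zeros(d)
        lam = lam0
        while lam > lam_target:
            lam = max(q * lam, lam_target)           # decrease lambda toward the target
            for _ in range(inner_steps):
                z = X @ w
                p = 1.0 / (1.0 + np.exp(-y * z))     # sigma(y_i * x_i^T w)
                grad = X.T @ ((p - 1.0) * y) / n + lam * w
                s = p * (1.0 - p)                    # logistic curvature weights
                H = X.T @ (X * s[:, None]) / n + lam * np.eye(d)
                step = np.linalg.solve(H, grad)
                dec = np.sqrt(step @ grad)           # Newton decrement
                w -= step / (1.0 + dec)              # damped Newton update
        return w

The geometric decrease of the regularization parameter mirrors the sequence of problems described in the abstract, and the damping based on the Newton decrement is one standard way to keep Newton iterations globally convergent for self-concordant-type losses; the non-parametric variant in the paper additionally uses Nyström projections, which this sketch does not include.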