First-order Optimization for Superquantile-based Supervised Learning

From MaRDI portal
Publication:6350227

arXiv: 2009.14575
MaRDI QID: Q6350227

Author name not available

Publication date: 30 September 2020

Abstract: Classical supervised learning via empirical risk (or negative log-likelihood) minimization hinges upon the assumption that the testing distribution coincides with the training distribution. This assumption can be challenged in modern applications of machine learning, in which learning machines may operate at prediction time on testing data whose distribution departs from that of the training data. We revisit the superquantile regression method by proposing a first-order optimization algorithm to minimize a superquantile-based learning objective. The proposed algorithm is based on smoothing the superquantile function by infimal convolution. Promising numerical results illustrate the interest of the approach for safer supervised learning.
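
The two ingredients named in the abstract can be made concrete with a small numerical sketch. The snippet below is an illustration only, not the authors' SPQR implementation, and all function names are hypothetical: it computes the empirical superquantile (CVaR) via the Rockafellar-Uryasev representation, and a smooth surrogate obtained by infimal convolution of the positive-part function with a quadratic (its Moreau envelope). The paper smooths the superquantile itself by infimal convolution; this sketch applies the same idea to the hinge term of the Rockafellar-Uryasev formula, and the surrogate recovers the exact value as the smoothing parameter mu tends to 0.

```python
# Illustration only (not the authors' SPQR code): empirical superquantile
# and a smoothed surrogate via infimal convolution with a quadratic.
import numpy as np

def superquantile(losses, p):
    """Empirical superquantile (CVaR) at level p in (0, 1), using the
    Rockafellar-Uryasev formula with eta set to the empirical p-quantile."""
    losses = np.asarray(losses, dtype=float)
    eta = np.quantile(losses, p)
    return eta + np.mean(np.maximum(losses - eta, 0.0)) / (1.0 - p)

def smoothed_hinge(t, mu):
    """Moreau envelope of t -> max(t, 0) with parameter mu > 0: a Huber-like
    function, quadratic on [0, mu], linear beyond, differentiable everywhere."""
    return np.where(t <= 0.0, 0.0,
           np.where(t >= mu, t - 0.5 * mu, t**2 / (2.0 * mu)))

def smoothed_superquantile(losses, p, eta, mu):
    """Smooth surrogate of the Rockafellar-Uryasev objective; it lower-bounds
    the exact objective and converges to it as mu -> 0."""
    losses = np.asarray(losses, dtype=float)
    return eta + np.mean(smoothed_hinge(losses - eta, mu)) / (1.0 - p)

rng = np.random.default_rng(0)
losses = rng.standard_normal(10_000)
p = 0.9
eta = np.quantile(losses, p)
print(superquantile(losses, p))              # exact empirical CVaR at level 0.9
for mu in (1.0, 0.1, 0.01):                  # surrogate approaches the exact value
    print(mu, smoothed_superquantile(losses, p, eta, mu))
```

In a learning setting one would plug per-example losses depending on model parameters into the surrogate and minimize jointly over the parameters and eta with a first-order method; this differentiable route to superquantile-based learning is the regime the paper's algorithm addresses.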

Has companion code repository: https://github.com/yassine-laguel/spqr

