Tighter Information-Theoretic Generalization Bounds from Supersamples
Publication: 6425482
arXiv: 2302.02432
MaRDI QID: Q6425482
Author name not available
Publication date: 5 February 2023
Abstract: In this work, we present a variety of novel information-theoretic generalization bounds for learning algorithms, in the supersample setting of Steinke & Zakynthinou (2020), the setting of the "conditional mutual information" framework. Our development exploits projecting the loss pair (obtained from a training instance and a testing instance) down to a single number and correlating the loss values with a Rademacher sequence (and its shifted variants). The presented bounds include square-root bounds, fast-rate bounds (including those based on variance and sharpness), and bounds for interpolating algorithms. We show, theoretically or empirically, that these bounds are tighter than all information-theoretic bounds known to date in the same supersample setting.
Has companion code repository: https://github.com/ZiqiaoWangGeothe/ld-single-CMI
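The following is a minimal, self-contained sketch (not the authors' code; see the companion repository above) of the supersample construction described in the abstract: a pair of instances per index, a Rademacher membership variable selecting which instance is trained on, and the projection of the loss pair to a single number whose correlation with the Rademacher signs equals the empirical generalization gap. The toy learner, data distribution, and loss are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200   # number of supersample columns (instance pairs)
d = 5     # feature dimension

# Supersample: 2n labelled Gaussian examples, arranged as n pairs (z_{i,0}, z_{i,1}).
y = rng.choice([-1.0, 1.0], size=(n, 2))
x = y[..., None] + rng.normal(size=(n, 2, d))

# Membership variables: eps_i in {-1, +1}; U_i = (eps_i + 1) / 2 selects the training
# instance of pair i, the other instance is held out as the test instance.
eps = rng.choice([-1, 1], size=n)
u = (eps + 1) // 2

# Toy learner (illustrative assumption): a class-mean linear classifier fit on the
# selected half of each pair.
x_train = x[np.arange(n), u]
y_train = y[np.arange(n), u]
w = (x_train * y_train[:, None]).mean(axis=0)

def zero_one_loss(xv, yv):
    """0-1 loss of the linear predictor sign(<w, x>)."""
    return (np.sign(xv @ w) != yv).astype(float)

# Loss pair for each supersample column, projected down to a single number:
# the loss difference Delta_i = loss(z_{i,1}) - loss(z_{i,0}).
loss_pair = np.stack([zero_one_loss(x[:, 0], y[:, 0]),
                      zero_one_loss(x[:, 1], y[:, 1])], axis=1)
delta = loss_pair[:, 1] - loss_pair[:, 0]

# In the supersample setting, the empirical generalization gap is exactly the
# correlation of the projected losses with the (sign-flipped) Rademacher variables:
#   gap = (1/n) * sum_i (-eps_i) * Delta_i.
gap_via_projection = np.mean(-eps * delta)

# Sanity check: the same quantity computed directly as test loss minus training loss.
test_loss = loss_pair[np.arange(n), 1 - u].mean()
train_loss = loss_pair[np.arange(n), u].mean()
print(gap_via_projection, test_loss - train_loss)  # the two values coincide
```

Bounding the expectation of this gap via the mutual information between the projected loss differences and the membership variables is the kind of quantity the paper's bounds control; the sketch only illustrates the setup, not the bounds themselves.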