Optimal Representations for Covariate Shift
Publication: 6387236
arXiv: 2201.00057
MaRDI QID: Q6387236
Author name not available
Publication date: 31 December 2021
Abstract: Machine learning systems often experience a distribution shift between training and testing. In this paper, we introduce a simple variational objective whose optima are exactly the set of all representations on which risk minimizers are guaranteed to be robust to any distribution shift that preserves the Bayes predictor, e.g., covariate shifts. Our objective has two components. First, a representation must remain discriminative for the task, i.e., some predictor must be able to simultaneously minimize the source and target risk. Second, the representation's marginal support needs to be the same across source and target. We make this practical by designing self-supervised objectives that only use unlabelled data and augmentations to train robust representations. Our objectives give insights into the robustness of CLIP, and further improve CLIP's representations to achieve SOTA results on DomainBed.
Has companion code repository: https://github.com/facebookresearch/DomainBed
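The abstract describes the objective only at a high level: one term keeping the representation discriminative for the task, and one term matching the representation's marginal support across source and target. The following is a minimal, hypothetical PyTorch sketch of that two-term structure, not the paper's actual variational or self-supervised objective; all names (`TwoTermObjective`, the mean-feature alignment penalty, the `lam` weight) are assumptions for illustration, and the real method should be taken from the companion repository above.

```python
# Hypothetical sketch of a two-term loss in the spirit of the abstract:
# (1) keep the representation discriminative on the labelled source task;
# (2) penalize mismatch between source and target representation statistics
#     as a crude stand-in for the support-matching condition.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoTermObjective(nn.Module):
    def __init__(self, in_dim: int, feat_dim: int, num_classes: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x_src, y_src, x_tgt, lam: float = 1.0):
        z_src = self.encoder(x_src)   # source representations (labelled)
        z_tgt = self.encoder(x_tgt)   # target representations (unlabelled)

        # Term 1: representation must remain discriminative for the task.
        task_loss = F.cross_entropy(self.classifier(z_src), y_src)

        # Term 2: simple mean-feature alignment between source and target;
        # the paper instead uses a variational / self-supervised objective.
        align_loss = (z_src.mean(dim=0) - z_tgt.mean(dim=0)).pow(2).sum()

        return task_loss + lam * align_loss


# Illustration on random data only:
model = TwoTermObjective(in_dim=16, feat_dim=32, num_classes=3)
x_src, y_src = torch.randn(8, 16), torch.randint(0, 3, (8,))
x_tgt = torch.randn(8, 16)
loss = model(x_src, y_src, x_tgt)
loss.backward()
```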