On Balancing Bias and Variance in Unsupervised Multi-Source-Free Domain Adaptation

Publication: Q6389902

arXiv: 2202.00796

Author name not available

Publication date: 1 February 2022

Abstract: Due to privacy, storage, and other constraints, there is a growing need for unsupervised domain adaptation techniques in machine learning that do not require access to the data used to train a collection of source models. Existing methods for multi-source-free domain adaptation (MSFDA) typically train a target model on pseudo-labeled data produced by the source models, focusing on improving pseudo-labeling techniques or proposing new training objectives. Instead, we aim to analyze the fundamental limits of MSFDA. In particular, we develop an information-theoretic bound on the generalization error of the resulting target model, which illustrates an inherent bias-variance trade-off. We then provide insight into how to balance this trade-off from three perspectives: domain aggregation, selective pseudo-labeling, and joint feature alignment, which leads to the design of novel algorithms. Experiments on multiple datasets validate our theoretical analysis and demonstrate the state-of-the-art performance of the proposed algorithm, especially on some of the most challenging datasets, including Office-Home and DomainNet.
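For orientation, the following is a minimal, hypothetical sketch of the general MSFDA recipe the abstract summarizes: aggregate the softmax outputs of several source models (domain aggregation), keep only pseudo-labels whose aggregated confidence clears a threshold (selective pseudo-labeling), and fine-tune a target model on the retained samples. The model interfaces, the uniform aggregation weights, and the 0.9 threshold are illustrative assumptions, not the paper's actual algorithm.

```python
import torch
import torch.nn.functional as F


def aggregate_pseudo_labels(source_models, x, weights=None, threshold=0.9):
    """Weighted-average source predictions and select confident pseudo-labels."""
    with torch.no_grad():
        # Stack per-source class probabilities: shape (num_sources, batch, num_classes).
        probs = torch.stack([F.softmax(m(x), dim=1) for m in source_models])
        if weights is None:  # default to uniform domain-aggregation weights
            weights = torch.full((len(source_models),), 1.0 / len(source_models),
                                 device=probs.device)
        agg = torch.einsum("s,sbc->bc", weights, probs)  # aggregated probabilities
        confidence, pseudo = agg.max(dim=1)
        mask = confidence >= threshold  # selective pseudo-labeling: drop low-confidence samples
    return pseudo, mask


def adapt_target(target_model, source_models, target_loader, epochs=1, lr=1e-3):
    """Fine-tune the target model on pseudo-labeled, unlabeled target data."""
    optimizer = torch.optim.SGD(target_model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x in target_loader:  # target batches are unlabeled
            pseudo, mask = aggregate_pseudo_labels(source_models, x)
            if mask.sum() == 0:
                continue  # no confident pseudo-labels in this batch
            loss = F.cross_entropy(target_model(x[mask]), pseudo[mask])
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return target_model
```

In the paper, balancing the bias-variance trade-off additionally involves domain aggregation weights and joint feature alignment across domains; the sketch above only illustrates the pseudo-labeling pipeline such components would plug into.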

Companion code repository: https://github.com/maohaos2/MSFDA
