Provable Stochastic Optimization for Global Contrastive Learning: Small Batch Does Not Harm Performance


arXiv: 2202.12387
MaRDI QID: Q6392104

Author name not available

Publication date: 24 February 2022

Abstract: In this paper, we study contrastive learning from an optimization perspective, aiming to analyze and address a fundamental issue of existing contrastive learning methods that rely on either a large batch size or a large dictionary of feature vectors. We consider a global objective for contrastive learning, which contrasts each positive pair with all negative pairs for an anchor point. From the optimization perspective, we explain why existing methods such as SimCLR require a large batch size in order to achieve a satisfactory result. In order to remove such a requirement, we propose a memory-efficient Stochastic Optimization algorithm for solving the Global objective of Contrastive Learning of Representations, named SogCLR. We show that its optimization error is negligible under a reasonable condition after a sufficient number of iterations, or is diminishing for a slightly different global contrastive objective. Empirically, we demonstrate that SogCLR with a small batch size (e.g., 256) can achieve similar performance to SimCLR with a large batch size (e.g., 8192) on a self-supervised learning task on ImageNet-1K. We also show that the proposed optimization technique is generic and can be applied to other contrastive losses, e.g., two-way contrastive losses for bimodal contrastive learning. The proposed method is implemented in our open-sourced library LibAUC (www.libauc.org).
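The core idea described in the abstract is to keep, for each anchor, a moving-average estimate of the global average of exponentiated negative-pair similarities, so that a small mini-batch can stand in for contrasting against all negatives. The following is a minimal, illustrative PyTorch sketch of that moving-average-estimator idea, not the authors' implementation; the class name, argument names, and update details are assumptions made for illustration, and the official code lives in the linked repository and in LibAUC.

import torch

class GlobalContrastiveLossSketch(torch.nn.Module):
    """Illustrative sketch of a SogCLR-style loss: each anchor i keeps a
    moving-average estimate u[i] of the average exponentiated negative
    similarity, so a small batch can approximate the global objective."""

    def __init__(self, num_samples, temperature=0.1, gamma=0.9):
        super().__init__()
        self.t = temperature
        self.gamma = gamma  # moving-average factor for the per-anchor estimates
        self.register_buffer("u", torch.zeros(num_samples))

    def forward(self, z1, z2, idx):
        # z1, z2: L2-normalized embeddings of two augmented views, shape (B, d)
        # idx: dataset indices of the anchors in this mini-batch, shape (B,)
        B = z1.size(0)
        z = torch.cat([z1, z2], dim=0)                  # all 2B views
        sim = z1 @ z.T / self.t                         # anchor vs. all views
        pos = (z1 * z2).sum(dim=1) / self.t             # positive-pair similarity

        # mask out the anchor itself and its positive, keeping negatives only
        mask = torch.ones_like(sim, dtype=torch.bool)
        mask[torch.arange(B), torch.arange(B)] = False       # self
        mask[torch.arange(B), torch.arange(B) + B] = False   # positive view
        neg_exp = torch.exp(sim - pos.unsqueeze(1)) * mask.float()

        # mini-batch estimate of the per-anchor average over negatives
        g_batch = neg_exp.sum(dim=1) / (2 * B - 2)

        # moving-average update of the global estimate (no gradient through u)
        with torch.no_grad():
            u_old = self.u[idx]
            u_new = torch.where(u_old == 0, g_batch,
                                (1 - self.gamma) * u_old + self.gamma * g_batch)
            self.u[idx] = u_new

        # surrogate whose gradient is (1 / u[i]) * grad(g_batch), i.e. the
        # stochastic gradient of log of the global average, up to temperature
        loss = (g_batch / u_new.clamp(min=1e-8)).mean()
        return loss

In this sketch the buffer u plays the role that a very large batch (or a large feature dictionary) plays in other methods: it carries information about all negatives across iterations, which is why the indices idx must be stable dataset indices so the per-anchor estimates persist between epochs.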




Has companion code repository: https://github.com/optimization-ai/sogclr








