DSelect-k: Differentiable Selection in the Mixture of Experts with Applications to Multi-Task Learning

Publication: 6369673

arXiv: 2106.03760 · MaRDI QID: Q6369673

Zhe Zhao, Hussein Hazimeh, Rahul Mazumder, Maheswaran Sathiamoorthy, Aakanksha Chowdhery, Ed H. Chi, Lichan Hong, Yihua Chen

Publication date: 7 June 2021

Abstract: The Mixture-of-Experts (MoE) architecture is showing promising results in improving parameter sharing in multi-task learning (MTL) and in scaling high-capacity neural networks. State-of-the-art MoE models use a trainable sparse gate to select a subset of the experts for each input example. While conceptually appealing, existing sparse gates, such as Top-k, are not smooth. The lack of smoothness can lead to convergence and statistical performance issues when training with gradient-based methods. In this paper, we develop DSelect-k: a continuously differentiable and sparse gate for MoE, based on a novel binary encoding formulation. The gate can be trained using first-order methods, such as stochastic gradient descent, and offers explicit control over the number of experts to select. We demonstrate the effectiveness of DSelect-k on both synthetic and real MTL datasets with up to 128 tasks. Our experiments indicate that DSelect-k can achieve statistically significant improvements in prediction and expert selection over popular MoE gates. Notably, on a real-world, large-scale recommender system, DSelect-k achieves over 22% improvement in predictive performance compared to Top-k. We provide an open-source implementation of DSelect-k.
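The gate described in the abstract rests on two ingredients: a smooth step function and a binary encoding of expert indices. Below is a minimal NumPy sketch of that idea, not the authors' open-source TensorFlow implementation; the exact cubic form of the smooth step, the LSB-first bit ordering, and the restriction to a power-of-two number of experts are simplifying assumptions made here.

import numpy as np

def smooth_step(t, gamma=1.0):
    # Smooth step: exactly 0 below -gamma/2, exactly 1 above gamma/2,
    # with a cubic interpolant in between (assumed form).
    t = np.asarray(t, dtype=float)
    s = -2.0 / gamma**3 * t**3 + 1.5 / gamma * t + 0.5
    return np.where(t <= -gamma / 2, 0.0, np.where(t >= gamma / 2, 1.0, s))

def single_selector(z, n_experts, gamma=1.0):
    # Map m = ceil(log2 n) real code variables z to a point on the simplex
    # over n_experts: expert i gets weight prod_j S(z_j)^{b_j(i)} (1 - S(z_j))^{1 - b_j(i)},
    # where b(i) is the binary code of i (LSB-first here; n assumed a power of two).
    m = int(np.ceil(np.log2(n_experts)))
    s = smooth_step(z[:m], gamma)
    weights = np.ones(n_experts)
    for i in range(n_experts):
        bits = [(i >> j) & 1 for j in range(m)]
        for j, b in enumerate(bits):
            weights[i] *= s[j] if b else 1.0 - s[j]
    return weights

def dselect_k_gate(Z, w, n_experts, gamma=1.0):
    # Combine k single selectors (rows of Z) with a softmax over w.
    alpha = np.exp(w - w.max())
    alpha /= alpha.sum()
    return sum(a * single_selector(z, n_experts, gamma) for a, z in zip(alpha, Z))

# Example: a gate over 8 experts intended to select (at most) k = 2 of them.
rng = np.random.default_rng(0)
k, n = 2, 8
Z = rng.normal(size=(k, 3))   # 3 = ceil(log2 8) code variables per selector
w = rng.normal(size=k)
print(dselect_k_gate(Z, w, n))  # differentiable expert weights

Because the step function saturates exactly, once every code variable satisfies |z_j| >= gamma/2 each selector becomes one-hot, so the combined gate places nonzero weight on at most k experts while remaining continuously differentiable during training with first-order methods.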




Has companion code repository: https://github.com/google-research/google-research/tree/master/dselect_k_moe







