Heterogeneous Multi-player Multi-armed Bandits: Closing the Gap and Generalization
Publication:6381479
arXiv: 2110.14622, MaRDI QID: Q6381479
Author name not available
Publication date: 27 October 2021
Abstract: Despite the significant interest and much progress in decentralized multi-player multi-armed bandit (MP-MAB) problems in recent years, the regret gap to the natural centralized lower bound in the heterogeneous MP-MAB setting remains open. In this paper, we propose BEACON -- Batched Exploration with Adaptive COmmunicatioN -- which closes this gap. BEACON accomplishes this goal with novel contributions in implicit communication and efficient exploration. For the former, we propose a novel adaptive differential communication (ADC) design that significantly improves the implicit communication efficiency. For the latter, a carefully crafted batched exploration scheme is developed to enable incorporation of the combinatorial upper confidence bound (CUCB) principle. We then generalize the existing linear-reward MP-MAB problems, where the system reward is always the sum of individually collected rewards, to a new MP-MAB problem where the system reward is a general (nonlinear) function of individual rewards. We extend BEACON to solve this problem and prove a logarithmic regret. BEACON bridges the algorithm design and regret analysis of combinatorial MAB (CMAB) and MP-MAB, two largely disjoint areas in MAB research, and the results in this paper suggest that this previously overlooked connection is worth further investigation.
Has companion code repository: https://github.com/shengroup/mpmab_beacon
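For readers unfamiliar with the CUCB principle the abstract refers to, the core step can be illustrated as follows. This is a minimal, hypothetical sketch (not code from the linked repository): it computes standard UCB-style indices per player-arm pair and then brute-forces the collision-free assignment of players to arms maximizing the summed indices, which is the combinatorial oracle step a CUCB-based scheme relies on; the constant 1.5 and the brute-force matching are illustrative assumptions.

```python
import math
from itertools import permutations

def ucb_indices(means, counts, t):
    """UCB1-style indices per (player, arm) pair; means[p][k] is the empirical
    mean reward of arm k for player p, counts[p][k] its pull count, t the round."""
    return [
        [m + math.sqrt(1.5 * math.log(t) / max(n, 1)) for m, n in zip(mrow, nrow)]
        for mrow, nrow in zip(means, counts)
    ]

def best_assignment(ucb):
    """Brute-force the collision-free matching of M players to K arms (M <= K)
    that maximizes the sum of indices -- the oracle a CUCB scheme calls each batch.
    Returns a list where entry p is the arm assigned to player p."""
    num_players, num_arms = len(ucb), len(ucb[0])
    best_val, best_perm = float("-inf"), None
    for perm in permutations(range(num_arms), num_players):
        val = sum(ucb[p][perm[p]] for p in range(num_players))
        if val > best_val:
            best_val, best_perm = val, perm
    return list(best_perm)
```

In a heterogeneous setting the means differ across players, so the oracle is a maximum-weight matching rather than a top-M selection; the brute force above is only practical for small instances, and an actual implementation would use a polynomial-time matching algorithm.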