
Thompson Sampling Algorithms for Mean-Variance Bandits

From MaRDI portal
Publication:6333935

arXiv: 2002.00232
MaRDI QID: Q6333935

Author name not available

Publication date: 1 February 2020

Abstract: The multi-armed bandit (MAB) problem is a classical learning task that exemplifies the exploration-exploitation tradeoff. However, standard formulations do not take into account risk. In online decision-making systems, risk is a primary concern. In this regard, the mean-variance risk measure is one of the most common objective functions. Existing algorithms for mean-variance optimization in the context of MAB problems have unrealistic assumptions on the reward distributions. We develop Thompson Sampling-style algorithms for mean-variance MAB and provide comprehensive regret analyses for Gaussian and Bernoulli bandits with fewer assumptions. Our algorithms achieve the best known regret bounds for mean-variance MABs and also attain the information-theoretic bounds in some parameter regimes. Empirical simulations show that our algorithms significantly outperform existing LCB-based algorithms for all risk tolerances.
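To make the idea in the abstract concrete, here is a minimal, illustrative sketch of a Thompson-Sampling-style mean-variance bandit for Gaussian arms. It is not the paper's exact algorithm: the posterior update, the initialisation of two pulls per arm, and the objective `rho * mean - variance` (with `rho` a risk-tolerance parameter) are simplifying assumptions made for this sketch.

```python
import numpy as np

def mvts_gaussian(means, stds, rho=1.0, horizon=2000, seed=0):
    """Illustrative Thompson-Sampling-style mean-variance bandit
    (a simplified sketch, not the paper's exact MVTS algorithm).

    Each arm keeps running sufficient statistics; per round we sample a
    plausible (mean, variance) pair per arm from a Normal-Gamma-style
    posterior and pull the arm maximising the sampled mean-variance
    objective rho * mean - variance."""
    rng = np.random.default_rng(seed)
    k = len(means)
    n = np.zeros(k)      # pull counts
    mu = np.zeros(k)     # running means
    m2 = np.zeros(k)     # running sums of squared deviations
    pulls = np.zeros(k, dtype=int)
    for t in range(horizon):
        if t < 2 * k:
            arm = t % k  # pull each arm twice to initialise statistics
        else:
            # Sample a precision per arm from a Gamma posterior, then a
            # mean given that precision (simplified Normal-Gamma update).
            tau = rng.gamma(shape=n / 2, scale=2.0 / np.maximum(m2, 1e-9))
            theta = rng.normal(mu, 1.0 / np.sqrt(n * tau))
            # Maximise the sampled mean-variance objective.
            arm = int(np.argmax(rho * theta - 1.0 / tau))
        x = rng.normal(means[arm], stds[arm])
        # Welford's online update of mean and squared-deviation sum.
        n[arm] += 1
        d = x - mu[arm]
        mu[arm] += d / n[arm]
        m2[arm] += d * (x - mu[arm])
        pulls[arm] += 1
    return pulls
```

For example, with two unit-variance arms of means 0 and 1 and `rho = 1`, the second arm has the better mean-variance objective, and the sketch concentrates its pulls there as the horizon grows.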

Has companion code repository: https://github.com/ksetdekov/trip_choice_optimizer

This page was built for publication: Thompson Sampling Algorithms for Mean-Variance Bandits
