DTR Bandit: Learning to Make Response-Adaptive Decisions With Low Regret
Publication: 6340156
arXiv: 2005.02791
MaRDI QID: Q6340156
Nathan Kallus, Yichun Hu
Publication date: 6 May 2020
Abstract: Dynamic treatment regimes (DTRs) are personalized, adaptive, multi-stage treatment plans that adapt treatment decisions both to an individual's initial features and to intermediate outcomes and features at each subsequent stage, which are themselves affected by decisions in prior stages. Examples include personalized first- and second-line treatments of chronic conditions like diabetes, cancer, and depression, which adapt to the patient's response to first-line treatment, disease progression, and individual characteristics. While the existing literature mostly focuses on estimating the optimal DTR from offline data, such as data from sequentially randomized trials, we study the problem of developing the optimal DTR in an online manner, where the interaction with each individual affects both our cumulative reward and our data collection for future learning. We term this the DTR bandit problem. We propose a novel algorithm that, by carefully balancing exploration and exploitation, is guaranteed to achieve rate-optimal regret when the transition and reward models are linear. We demonstrate our algorithm and its benefits both in synthetic experiments and in a case study of adaptive treatment of major depressive disorder using real-world data.
Has companion code repository: https://github.com/CausalML/DTR-Bandit
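The repository above contains the authors' implementation. Purely as an illustration of the problem setting, the minimal sketch below simulates a two-stage linear DTR bandit (linear transition and reward models, as in the abstract) with a naive explore-then-commit learner. The dimensions, noise levels, exploration length, and the least-squares backup rule are assumptions for this toy example, not the paper's algorithm or the repository's code.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, n_explore = 3, 5000, 500  # hypothetical feature dim, horizon, exploration length

# Hypothetical ground truth (unknown to the learner): stage-1 reward
# r1 = <theta1[a1], x1>, transition x2 = M[a1] @ x1, stage-2 reward
# r2 = <theta2[a2], x2>, each plus Gaussian noise.
theta1 = rng.normal(size=(2, d))
theta2 = rng.normal(size=(2, d))
M = rng.normal(size=(2, d, d)) / np.sqrt(d)

# Ridge-style least-squares statistics, one regression per (stage, action).
A1 = np.stack([np.eye(d)] * 2); b1 = np.zeros((2, d))  # stage-1 Q-value regression
A2 = np.stack([np.eye(d)] * 2); b2 = np.zeros((2, d))  # stage-2 reward regression

def ls(A, b):
    """Least-squares / ridge estimate of the linear coefficient."""
    return np.linalg.solve(A, b)

total_reward = 0.0
for t in range(T):
    x1 = rng.normal(size=d)  # initial features of individual t
    if t < n_explore:
        a1 = int(rng.integers(2))  # explore: randomize stage-1 treatment
    else:
        # Exploit: pick the stage-1 action with higher estimated Q-value.
        a1 = int(np.argmax([ls(A1[a], b1[a]) @ x1 for a in range(2)]))
    x2 = M[a1] @ x1 + 0.1 * rng.normal(size=d)  # intermediate features
    r1 = theta1[a1] @ x1 + 0.1 * rng.normal()
    if t < n_explore:
        a2 = int(rng.integers(2))
    else:
        a2 = int(np.argmax([ls(A2[a], b2[a]) @ x2 for a in range(2)]))
    r2 = theta2[a2] @ x2 + 0.1 * rng.normal()
    # Update stage-2 regression on r2, then back up the observed total
    # return r1 + r2 as the stage-1 regression target.
    A2[a2] += np.outer(x2, x2); b2[a2] += r2 * x2
    A1[a1] += np.outer(x1, x1); b1[a1] += (r1 + r2) * x1
    total_reward += r1 + r2

print(f"average per-round reward: {total_reward / T:.3f}")
```

A fixed exploration phase like this is the simplest baseline; per the abstract, the paper's algorithm instead balances exploration and exploitation carefully enough to achieve rate-optimal regret under the linear model assumptions.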