Solving reward-collecting problems with UAVs: a comparison of online optimization and Q-learning

Publication: Q6384455

arXiv: 2112.00141
MaRDI QID: Q6384455

Author name not available

Publication date: 30 November 2021

Abstract: Uncrewed autonomous vehicles (UAVs) have made significant contributions to reconnaissance and surveillance missions in past US military campaigns. As the prevalence of UAVs increases, there have also been improvements in counter-UAV technology that make it difficult for them to successfully obtain valuable intelligence within an area of interest. Hence, it has become important that modern UAVs accomplish their missions while maximizing their chances of survival. In this work, we specifically study the problem of identifying a short path from a designated start to a goal, while collecting all rewards and avoiding adversaries that move randomly on the grid. We also provide a possible application of the framework in a military setting, namely autonomous casualty evacuation. We present a comparison of three methods to solve this problem: a Deep Q-Learning model, an ε-greedy tabular Q-Learning model, and an online optimization framework. Our computational experiments, designed using simple grid-world environments with random adversaries, showcase how these approaches work and compare them in terms of performance, accuracy, and computational time.
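
The abstract contrasts an ε-greedy tabular Q-Learning model with Deep Q-Learning and an online optimization framework. Below is a minimal sketch of the ε-greedy tabular Q-learning update on a discrete grid-world, assuming a standard Gym-style environment API; the function and parameter names are illustrative only and are not taken from the paper or its companion repository.

```python
import numpy as np

def epsilon_greedy_q_learning(env, n_episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning with an epsilon-greedy policy on a discrete grid-world.

    Assumes a Gym-style API: env.reset() -> state, env.step(action) ->
    (next_state, reward, done, info), with integer-indexed states and actions.
    """
    Q = np.zeros((env.observation_space.n, env.action_space.n))
    for _ in range(n_episodes):
        state = env.reset()
        done = False
        while not done:
            # Explore with probability epsilon, otherwise act greedily.
            if np.random.rand() < epsilon:
                action = env.action_space.sample()
            else:
                action = int(np.argmax(Q[state]))
            next_state, reward, done, _ = env.step(action)
            # Standard Q-learning temporal-difference update.
            Q[state, action] += alpha * (
                reward + gamma * np.max(Q[next_state]) - Q[state, action]
            )
            state = next_state
    return Q
```

In the reward-collecting setting described above, the environment's reward signal would encode collected rewards, reaching the goal, and encounters with randomly moving adversaries; how the paper defines these terms is specified in the publication and its companion code, not in this sketch.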




Has companion code repository: https://github.com/benliu31492/solving-reward-collecting-problems-with-uavs-a-comparison-of-online-optimization-and-q-learning








