Deep W-Networks: Solving Multi-Objective Optimisation Problems With Deep Reinforcement Learning

From MaRDI portal
Publication:6416662

arXiv: 2211.04813 · MaRDI QID: Q6416662

Author name not available

Publication date: 9 November 2022

Abstract: In this paper, we build on advances introduced by the Deep Q-Networks (DQN) approach to extend the multi-objective tabular Reinforcement Learning (RL) algorithm W-learning to large state spaces. The W-learning algorithm naturally resolves the competition between multiple single-objective policies in multi-objective environments; however, its tabular version does not scale well to environments with large state spaces. To address this issue, we replace the underlying Q-tables with DQNs and propose the addition of W-Networks as a replacement for the tabular weight (W) representations. We evaluate the resulting Deep W-Networks (DWN) approach on two widely accepted multi-objective RL benchmarks: deep sea treasure and multi-objective mountain car. We show that DWN resolves the competition between multiple policies while outperforming a baseline DQN solution. Additionally, we demonstrate that the proposed algorithm can find the Pareto front in both tested environments.
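The core mechanism the abstract describes is a competition: each objective keeps its own Q-function, and a per-objective W-value estimates how much that objective stands to lose if it does not control the current state; the policy with the highest W-value wins and its greedy action is executed. The sketch below illustrates this selection rule under stated assumptions: linear function approximators stand in for the paper's deep Q- and W-networks, and all class and variable names are illustrative, not taken from the companion repository.

```python
import numpy as np

rng = np.random.default_rng(0)

class DWNAgent:
    """Hypothetical sketch of DWN-style action selection.

    Each of the n_policies objectives owns a Q-function (state -> action
    values) and a W-function (state -> scalar importance). Linear models
    replace the deep networks of the actual DWN approach for brevity.
    """

    def __init__(self, n_policies, state_dim, n_actions):
        # One Q-approximator and one W-approximator per objective.
        self.q_weights = [rng.normal(size=(state_dim, n_actions)) * 0.1
                          for _ in range(n_policies)]
        self.w_weights = [rng.normal(size=state_dim) * 0.1
                          for _ in range(n_policies)]

    def q_values(self, i, state):
        # Action values of policy i in this state.
        return state @ self.q_weights[i]

    def w_value(self, i, state):
        # W_i(s): how much policy i expects to lose if another
        # policy's action is taken instead of its own.
        return float(state @ self.w_weights[i])

    def select_action(self, state):
        # Competition: the policy with the highest W-value wins the
        # state, and its greedy action is executed.
        ws = [self.w_value(i, state) for i in range(len(self.w_weights))]
        winner = int(np.argmax(ws))
        action = int(np.argmax(self.q_values(winner, state)))
        return winner, action

# Illustrative usage: two objectives, a 4-dimensional state, 3 actions.
agent = DWNAgent(n_policies=2, state_dim=4, n_actions=3)
s = rng.normal(size=4)
winner, action = agent.select_action(s)
```

In the full algorithm, each Q-network is trained with standard DQN updates on its own reward signal, while each W-network is trained on the losing policies' regret; this sketch covers only the per-step selection rule.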




Has companion code repository: https://github.com/deepwlearning/deepwnetworks
