Graph Reinforcement Learning for Network Control via Bi-Level Optimization

Publication:6436756

arXiv: 2305.09129
MaRDI QID: Q6436756

Author name not available

Publication date: 15 May 2023

Abstract: Optimization problems over dynamic networks have been extensively studied and widely used over the past decades to formulate numerous real-world problems. However, (1) traditional optimization-based approaches do not scale to large networks, and (2) the design of good heuristics or approximation algorithms often requires significant manual trial-and-error. In this work, we argue that data-driven strategies can automate this process and learn efficient algorithms without compromising optimality. To do so, we present network control problems through the lens of reinforcement learning and propose a graph network-based framework to handle a broad class of problems. Instead of naively computing actions over high-dimensional graph elements, e.g., edges, we propose a bi-level formulation where we (1) specify a desired next state via RL, and (2) solve a convex program to best achieve it, leading to drastically improved scalability and performance. We further highlight a collection of features desirable to system designers, investigate design decisions, and present experiments on real-world control problems showing the utility, scalability, and flexibility of our framework.
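The bi-level idea described in the abstract can be illustrated with a small sketch: an upper-level policy proposes a desired next state for the network, and a lower-level convex program computes edge flows that realize it as closely as possible. The sketch below is an illustrative assumption only, written with numpy and cvxpy on a toy 4-node flow network; the fixed desired_state stands in for the output of an RL policy, and none of the names, edges, or capacities are taken from the paper or its companion repository.

import numpy as np
import cvxpy as cp

# Toy directed network (illustrative, not from the paper): 4 nodes, 5 capacitated edges.
num_nodes = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
capacity = np.array([5.0, 5.0, 5.0, 5.0, 3.0])

# Node-edge incidence matrix: column e has -1 at the tail of edge e and +1 at its head.
A = np.zeros((num_nodes, len(edges)))
for e, (u, v) in enumerate(edges):
    A[u, e] -= 1.0
    A[v, e] += 1.0

current_state = np.array([4.0, 1.0, 1.0, 2.0])  # units currently at each node
desired_state = np.array([2.0, 2.0, 2.0, 2.0])  # upper level: an RL policy would output this

# Lower level: convex program choosing non-negative edge flows that move the
# system as close as possible to the desired next state, within edge capacities.
flow = cp.Variable(len(edges), nonneg=True)
next_state = current_state + A @ flow
problem = cp.Problem(cp.Minimize(cp.sum_squares(next_state - desired_state)),
                     [flow <= capacity])
problem.solve()

print("edge flows:", np.round(flow.value, 2))
print("achieved next state:", np.round(current_state + A @ flow.value, 2))

In this sketch the policy's action is a vector over nodes rather than over edges, which mirrors why the abstract attributes improved scalability to the bi-level split: per-edge decisions are delegated to the convex lower level.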




Has companion code repository: https://github.com/danielegammelli/graph-rl-for-network-optimization








