Soft Actor-Critic Learning-Based Joint Computing, Pushing, and Caching Framework in MEC Networks
From MaRDI portal
Publication: 6437304
arXiv: 2305.12099 · MaRDI QID: Q6437304
Author name not available
Publication date: 20 May 2023
Abstract: To support future 6G mobile applications, the mobile edge computing (MEC) network needs to jointly optimize computing, pushing, and caching to reduce transmission load and computation cost. To achieve this, we propose a deep reinforcement learning framework that dynamically orchestrates these three activities in the MEC network. The framework implicitly predicts users' future requests with deep networks and pushes or caches the appropriate content to enhance performance. To address the curse of dimensionality that arises from considering the three activities jointly, we adopt soft actor-critic reinforcement learning in a continuous action space and design action quantization and correction steps tailored to the underlying discrete optimization problem. Simulations in a single-user, single-server MEC network show that the proposed framework effectively reduces both transmission load and computing cost under various configurations of cache size and tolerable service delay.
Has companion code repository: https://github.com/xiangyu-gao/sac_joint_compute_push_cache
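The abstract's action quantization and correction step can be illustrated with a minimal sketch: the SAC policy emits a continuous action vector, which is thresholded into a binary caching decision and then projected back to feasibility when the cache capacity is exceeded. The function name, the threshold at zero, and the top-k correction rule below are illustrative assumptions, not the repository's actual implementation.

```python
import numpy as np

def quantize_and_correct(raw_action, cache_size):
    """Map a continuous SAC action in [-1, 1]^N to a feasible binary
    caching decision (hypothetical sketch of the paper's idea).

    raw_action: continuous policy output, one score per content item
    cache_size: maximum number of items that may be cached
    """
    raw_action = np.asarray(raw_action, dtype=float)
    # Quantization: threshold each dimension at 0 -> cache (1) or not (0).
    decision = (raw_action > 0).astype(int)
    # Correction: if the quantized action violates the cache capacity,
    # keep only the items with the highest continuous scores
    # (a simple feasibility projection onto the constraint set).
    if decision.sum() > cache_size:
        keep = np.argsort(raw_action)[::-1][:cache_size]
        decision = np.zeros_like(decision)
        decision[keep] = 1
    return decision
```

For example, with scores `[0.9, -0.2, 0.5, 0.7]` and a cache of size 2, thresholding selects three items, and the correction keeps the two highest-scoring ones (indices 0 and 3).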