Deep Reinforcement Learning Autoencoder with Noisy Feedback

From MaRDI portal
Publication:6308108

arXiv: 1810.05419 · MaRDI QID: Q6308108

Author name not available

Publication date: 12 October 2018

Abstract: End-to-end learning of communication systems enables joint optimization of transmitter and receiver, implemented as deep neural network-based autoencoders, over any type of channel and for an arbitrary performance metric. Recently, an alternating training procedure was proposed which eliminates the need for an explicit channel model. However, this approach requires feedback of real-valued losses from the receiver to the transmitter during training. In this paper, we first show that alternating training works even with a noisy feedback channel. Then, we design a system that learns to transmit real numbers over an unknown channel without a preexisting feedback link. Once trained, this feedback system can be used to communicate losses during alternating training of autoencoders. Evaluations over additive white Gaussian noise and Rayleigh block-fading channels show that end-to-end communication systems trained using the proposed feedback system achieve the same performance as when trained with a perfect feedback link.
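To make the abstract's setup concrete, the following is a minimal sketch (not the paper's implementation) of the two ingredients it describes: an autoencoder-style link over an AWGN channel, and per-example losses that reach the transmitter only through a noisy feedback channel. For simplicity the "encoder" here is a fixed orthogonal codebook rather than a trained neural network, and the decoder is nearest-codeword detection; all names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

M, n = 4, 4            # number of messages, channel uses per message
snr_db = 10.0          # assumed operating SNR
sigma = np.sqrt(1.0 / (2 * 10 ** (snr_db / 10.0)))  # noise std per dimension

# Stand-in "encoder": orthogonal codewords, unit average power per channel use.
# In the paper this would be a trained neural network transmitter.
codebook = np.eye(M) * np.sqrt(n)

def transmit(msgs):
    # Map each message index to its codeword (one row of the codebook).
    return codebook[msgs]

def awgn(x, sigma):
    # Additive white Gaussian noise channel.
    return x + sigma * rng.normal(size=x.shape)

def decode(y):
    # Nearest-codeword (maximum-correlation) receiver.
    return np.argmax(y @ codebook.T, axis=1)

# Forward link: message -> channel -> estimate.
msgs = rng.integers(0, M, size=10_000)
est = decode(awgn(transmit(msgs), sigma))
bler = np.mean(est != msgs)

# Feedback link: the receiver's per-example losses (here, 0/1 block errors)
# are themselves sent back over a noisy channel, as in the paper's setting.
losses = (est != msgs).astype(float)
fed_back = awgn(losses, sigma)

# Averaging over a batch suppresses the feedback noise, which is one
# intuition for why training can tolerate a noisy feedback link.
print(f"BLER: {bler:.4f}")
print(f"clean vs. noisy mean loss: {losses.mean():.4f} vs. {fed_back.mean():.4f}")
```

Because the feedback noise is zero-mean, the batch-averaged noisy loss concentrates around the true average loss, which is the quantity a gradient-free (reinforcement-style) transmitter update would consume during alternating training.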




Has companion code repository: https://github.com/AravindGanesh/ML_WirelessComm








