Meta-Learning to Communicate: Fast End-to-End Training for Fading Channels
Publication: Q6327678
arXiv: 1910.09945
MaRDI QID: Q6327678
Author name not available
Publication date: 22 October 2019
Abstract: When a channel model is available, learning how to communicate over noisy fading channels can be formulated as the (unsupervised) training of an autoencoder consisting of the cascade of encoder, channel, and decoder. An important limitation of this approach is that training generally must be carried out from scratch for each new channel. To cope with this problem, prior works considered joint training over multiple channels, with the aim of finding a single encoder-decoder pair that works well on a whole class of channels. As a result, joint training ideally mimics the operation of non-coherent transmission schemes. In this paper, we propose to overcome the limitations of joint training via meta-learning: rather than training a common model for all channels, meta-learning finds a common initialization vector that enables fast training on any channel. The approach is validated via numerical results, demonstrating significant training speed-ups, with effective encoders and decoders obtained with as few as one iteration of Stochastic Gradient Descent.
Has companion code repository: https://github.com/kclip/meta-autoencoder
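The core idea described in the abstract (a shared initialization that adapts to a new channel realization in a few SGD steps) can be illustrated with a minimal first-order meta-learning sketch. The toy model below is an illustrative assumption, not the paper's actual autoencoder: each "task" is a scalar fading gain, the learner is a one-parameter linear model, and the meta-update is Reptile-style (move the shared initialization toward the per-task adapted weights).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # Hypothetical fading distribution: gain h ~ N(2, 0.25).
    # (Illustrative stand-in for a draw from a class of channels.)
    return 2.0 + 0.5 * rng.normal()

def inner_sgd(w, h, steps=5, lr=0.1, n=20):
    # Per-channel adaptation: a few SGD steps on MSE for y = h*x + noise.
    for _ in range(steps):
        x = rng.normal(size=n)
        y = h * x + 0.1 * rng.normal(size=n)
        grad = np.mean(2.0 * (w * x - y) * x)  # d/dw of mean squared error
        w = w - lr * grad
    return w

def meta_train(meta_iters=2000, meta_lr=0.05):
    w0 = 0.0  # shared initialization (the meta-parameter)
    for _ in range(meta_iters):
        h = sample_task()                      # draw a channel realization
        w_adapted = inner_sgd(w0, h)           # fast per-channel training
        w0 = w0 + meta_lr * (w_adapted - w0)   # Reptile-style meta-update
    return w0
```

After meta-training, `w0` sits near the center of the task distribution, so a single inner SGD step suffices to fit a fresh fading realization; this mirrors the "one iteration of SGD" speed-up reported in the abstract, in a much simpler setting.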