Neural Distributed Source Coding
From MaRDI portal
Publication: Q6369508
arXiv: 2106.02797 · MaRDI QID: Q6369508
Author name not available
Publication date: 5 June 2021
Abstract: Distributed source coding (DSC) is the task of encoding an input when correlated side information is available only to the decoder. Remarkably, Slepian and Wolf showed in 1973 that an encoder without access to the side information can asymptotically achieve the same compression rate as when the side information is available to it. While there is vast prior work on this topic, practical DSC has been limited to synthetic datasets and specific correlation structures. Here we present a framework for lossy DSC that is agnostic to the correlation structure and can scale to high dimensions. Rather than relying on hand-crafted source modeling, our method uses a conditional VQ-VAE to learn the distributed encoder and decoder. We evaluate our method on multiple datasets and show that it can handle complex correlations, significantly outperforming the current state-of-the-art method.
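The core setup in the abstract, an encoder that never sees the side information while the decoder does, can be illustrated with a toy sketch. This is not the paper's conditional VQ-VAE; it is a hedged NumPy analogue in which the "encoder" is a plain vector quantizer fit on x alone and the "decoder" stands in for a learned conditional decoder by blending the received codeword with the side information y. All variable names, the correlation model, and the blending weight are invented here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy correlated sources: y is the decoder-side information, x = y + small noise.
y = rng.normal(size=(1000, 4))
x = y + 0.1 * rng.normal(size=(1000, 4))

# "Encoder": quantizes x without access to y, via a small codebook
# refined with k-means-style updates (stand-in for a learned VQ encoder).
K = 16
codebook = rng.normal(size=(K, 4))
for _ in range(10):
    dists = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = dists.argmin(1)
    for k in range(K):
        mask = idx == k
        if mask.any():
            codebook[k] = x[mask].mean(0)

# Only the index is "transmitted": log2(K) = 4 bits per sample.
dists = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
idx = dists.argmin(1)

# "Decoder": combines the codeword with side information y.
# A fixed convex combination here, standing in for the learned conditional decoder.
x_hat = 0.5 * codebook[idx] + 0.5 * y

mse_no_side = ((x - codebook[idx]) ** 2).mean()
mse_with_side = ((x - x_hat) ** 2).mean()
print(mse_with_side < mse_no_side)  # decoder-side information improves reconstruction
```

Even this crude decoder shows the Slepian-Wolf intuition: the encoder's rate is fixed (4 bits per sample), yet reconstruction improves because the correlated y is exploited at the decoder only.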
Has companion code repository: https://github.com/acnagle/neural-dsc
This page was built for publication: Neural Distributed Source Coding