Cyclically Equivariant Neural Decoders for Cyclic Codes
From MaRDI portal
Publication: Q6367470
arXiv: 2105.05540 · MaRDI QID: Q6367470
Author name not available
Publication date: 12 May 2021
Abstract: Neural decoders were introduced as a generalization of the classic Belief Propagation (BP) decoding algorithms: the trellis graph of the BP algorithm is viewed as a neural network, and the weights on its edges are optimized by training the network. In this work, we propose a novel neural decoder for cyclic codes that exploits their cyclically invariant property. More precisely, we impose a shift-invariant structure on the weights of our neural decoder, so that any cyclic shift of the inputs results in the same cyclic shift of the outputs. Extensive simulations with BCH codes and punctured Reed-Muller (RM) codes show that our new decoder consistently outperforms previous neural decoders when decoding cyclic codes. Finally, we propose a list decoding procedure that significantly reduces the decoding error probability for BCH codes and punctured RM codes. For certain high-rate codes, the gap between our list decoder and the Maximum Likelihood decoder is less than 0.1 dB. Code available at https://github.com/cyclicallyneuraldecoder/CyclicallyEquivariantNeuralDecoders
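The shift-invariant weight structure described in the abstract can be illustrated with a minimal sketch (not the authors' code): a linear layer whose weight matrix is circulant, i.e. every row is a cyclic shift of a single shared weight vector, is cyclically equivariant, so shifting the input cyclically shifts the output by the same amount.

```python
import numpy as np

def circulant(first_row):
    """Build a circulant matrix whose i-th row is the first row
    cyclically shifted right by i positions."""
    n = len(first_row)
    return np.array([np.roll(first_row, i) for i in range(n)])

rng = np.random.default_rng(0)
n = 7                                # hypothetical code length
w = rng.standard_normal(n)           # one shared weight vector
W = circulant(w)                     # shift-invariant weight structure
x = rng.standard_normal(n)           # arbitrary input (e.g. channel LLRs)

# Equivariance check: applying the layer and then cyclically shifting
# gives the same result as shifting the input first.
shift = 3
assert np.allclose(np.roll(W @ x, shift), W @ np.roll(x, shift))
```

This property is exactly what the paper's weight constraint enforces inside the neural BP decoder: because the parity checks of a cyclic code are themselves shift-invariant, tying the weights this way matches the code's symmetry instead of learning it from data.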
Has companion code repository: https://github.com/cyclicallyneuraldecoder/CyclicallyEquivariantNeuralDecoders