Syntactically Informed Text Compression with Recurrent Neural Networks
From MaRDI portal
Publication: Q6276373
arXiv: 1608.02893
MaRDI QID: Q6276373
Author name not available
Publication date: 7 August 2016
Abstract: We present a self-contained system for constructing natural language models for use in text compression. Our system improves upon previous neural-network-based models by utilizing recent advances in syntactic parsing -- Google's SyntaxNet -- to augment character-level recurrent neural networks (RNNs). RNNs have proven exceptional at modeling sequence data such as text, as their architecture allows for modeling of long-term contextual information.
Has companion code repository: https://github.com/davidcox143/rnn-text-compress
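To illustrate the idea in the abstract -- a character-level RNN whose next-character probabilities drive a compressor -- here is a minimal, untrained sketch. It is not the paper's SyntaxNet-augmented system: `CharRNN` and `ideal_code_length_bits` are hypothetical names, the weights are random rather than learned, and the arithmetic coder itself is elided; the function only computes the ideal code length (sum of -log2 P(next char) in bits) that such a coder would approach.

```python
import math
import random

random.seed(0)


class CharRNN:
    """Minimal vanilla (Elman) RNN over characters.

    h_t = tanh(Wxh x_t + Whh h_{t-1} + bh);
    next-character distribution = softmax(Why h_t + by).
    Weights are random here; a real system would train them on text.
    """

    def __init__(self, vocab, hidden=16):
        self.vocab = vocab
        self.idx = {c: i for i, c in enumerate(vocab)}
        V, H = len(vocab), hidden
        rand = lambda n, m: [[random.uniform(-0.1, 0.1) for _ in range(m)]
                             for _ in range(n)]
        self.Wxh, self.Whh, self.Why = rand(H, V), rand(H, H), rand(V, H)
        self.bh, self.by = [0.0] * H, [0.0] * V
        self.H = H

    def step(self, ch, h):
        # One-hot encode the current character.
        x = [0.0] * len(self.vocab)
        x[self.idx[ch]] = 1.0
        # Recurrent update: mixes current input with previous hidden state,
        # which is what lets the model carry long-range context.
        h_new = [math.tanh(sum(self.Wxh[i][j] * x[j] for j in range(len(x)))
                           + sum(self.Whh[i][k] * h[k] for k in range(self.H))
                           + self.bh[i])
                 for i in range(self.H)]
        logits = [sum(self.Why[v][k] * h_new[k] for k in range(self.H))
                  + self.by[v]
                  for v in range(len(self.vocab))]
        # Numerically stable softmax over the vocabulary.
        m = max(logits)
        exps = [math.exp(l - m) for l in logits]
        Z = sum(exps)
        return [e / Z for e in exps], h_new


def ideal_code_length_bits(model, text):
    """Sum of -log2 P(next char): the compressed size (in bits) that an
    arithmetic coder driven by the model's predictions would approach."""
    h = [0.0] * model.H
    bits = 0.0
    for cur, nxt in zip(text, text[1:]):
        probs, h = model.step(cur, h)
        bits += -math.log2(probs[model.idx[nxt]])
    return bits


vocab = sorted(set("abracadabra"))
rnn = CharRNN(vocab)
probs, _ = rnn.step("a", [0.0] * rnn.H)
bits = ideal_code_length_bits(rnn, "abracadabra")
```

The better the model predicts the next character, the fewer bits the coder emits; the paper's contribution is sharpening those predictions with syntactic parse information.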