Learning Discrete Structured Representations by Adversarially Maximizing Mutual Information
From MaRDI portal
Publication: Q6338353
arXiv: 2004.03991 · MaRDI QID: Q6338353
Author name not available
Publication date: 8 April 2020
Abstract: We propose learning discrete structured representations from unlabeled data by maximizing the mutual information between a structured latent variable and a target variable. Calculating mutual information is intractable in this setting. Our key technical contribution is an adversarial objective that can be used to tractably estimate mutual information assuming only the feasibility of cross entropy calculation. We develop a concrete realization of this general formulation with Markov distributions over binary encodings. We report critical and unexpected findings on practical aspects of the objective such as the choice of variational priors. We apply our model on document hashing and show that it outperforms current best baselines based on discrete and vector quantized variational autoencoders. It also yields highly compressed interpretable representations.
Has companion code repository: https://github.com/karlstratos/ammi
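The abstract's core idea — estimating mutual information using only cross-entropy calculations and a variational prior — can be illustrated on a toy discrete case. The sketch below is not the authors' implementation (see the linked repository for that); it is a minimal numpy illustration, with hypothetical numbers, of the standard bound CE(p_Z, q) − H(Z|X) ≥ I(X;Z), which holds for any prior q and is tightened to equality by minimizing over q — the adversarial step — while the encoder maximizes the objective:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in nats, ignoring zero-probability outcomes."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def cross_entropy(p, q):
    """Cross entropy CE(p, q) = E_p[-log q] in nats."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return -np.sum(p[mask] * np.log(q[mask]))

# Toy setup (hypothetical numbers): X uniform over two symbols,
# encoder p(z|x) a distribution over a single latent bit z.
p_x = np.array([0.5, 0.5])
p_z_given_x = np.array([[0.9, 0.1],   # p(z | x=0)
                        [0.2, 0.8]])  # p(z | x=1)

# Exact quantities, computable here because the toy case is tiny.
p_z = p_x @ p_z_given_x                                        # marginal p(z)
h_z_given_x = sum(p_x[i] * entropy(p_z_given_x[i]) for i in range(2))
mi = entropy(p_z) - h_z_given_x                                # I(X; Z)

def mi_upper_bound(q_z):
    """CE(p_Z, q) - H(Z|X): an upper bound on I(X;Z) for any prior q.
    Minimizing over q recovers I(X;Z) exactly (at q = p_Z)."""
    ce = sum(p_x[i] * cross_entropy(p_z_given_x[i], q_z) for i in range(2))
    return ce - h_z_given_x

print(mi)                          # exact mutual information
print(mi_upper_bound([0.5, 0.5]))  # loose bound under a uniform prior
print(mi_upper_bound(p_z))         # tight: equals I(X;Z) at q = p_Z
```

In the paper's setting Z is a long binary code with a Markov factorization rather than a single bit, so H(Z|X) remains computable bit by bit while H(Z) is intractable, and the prior q is a learned model driven adversarially toward p_Z.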