Trading Information between Latents in Hierarchical Variational Autoencoders
Publication: 6425984
arXiv: 2302.04855
MaRDI QID: Q6425984
Author name not available
Publication date: 9 February 2023
Abstract: Variational Autoencoders (VAEs) were originally motivated (Kingma & Welling, 2014) as probabilistic generative models in which one performs approximate Bayesian inference. The proposal of β-VAEs (Higgins et al., 2017) breaks this interpretation and generalizes VAEs to application domains beyond generative modeling (e.g., representation learning, clustering, or lossy data compression) by introducing an objective function that allows practitioners to trade off between the information content ("bit rate") of the latent representation and the distortion of reconstructed data (Alemi et al., 2018). In this paper, we reconsider this rate/distortion trade-off in the context of hierarchical VAEs, i.e., VAEs with more than one layer of latent variables. We identify a general class of inference models for which one can split the rate into contributions from each layer, which can then be tuned independently. We derive theoretical bounds on the performance of downstream tasks as functions of the individual layers' rates and verify our theoretical findings in large-scale experiments. Our results provide guidance for practitioners on which region in rate-space to target for a given application.
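For reference, a minimal sketch of the rate/distortion objective the abstract describes, assuming the standard β-VAE formulation (Higgins et al., 2017; Alemi et al., 2018); the per-layer weights β₁, …, β_L are illustrative notation for the independent tuning mentioned in the abstract, not taken from the paper itself:

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
% beta-VAE objective (Higgins et al., 2017): expected distortion D
% plus the rate R (a KL divergence), weighted by beta.
\begin{align*}
\mathcal{L}_{\beta}(x)
  &= \underbrace{\mathbb{E}_{q_{\phi}(z \mid x)}\bigl[-\log p_{\theta}(x \mid z)\bigr]}_{\text{distortion } D}
   + \beta\,\underbrace{D_{\mathrm{KL}}\bigl(q_{\phi}(z \mid x)\,\big\|\,p_{\theta}(z)\bigr)}_{\text{rate } R} \\[4pt]
% For a hierarchical VAE with latents z_1, ..., z_L and an inference
% model in the class the paper identifies, the rate splits into
% per-layer contributions that can be weighted independently
% (the weights beta_1, ..., beta_L are assumed notation, for illustration).
R &= \sum_{\ell=1}^{L} R_{\ell},
\qquad
\mathcal{L}(x) = D + \sum_{\ell=1}^{L} \beta_{\ell}\, R_{\ell}
\end{align*}
\end{document}
```

Setting all β_ℓ equal to a single β recovers the ordinary β-VAE trade-off; the per-layer weighting is what lets one target a specific region in rate-space, as the abstract suggests.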
Has companion code repository: https://github.com/timxzz/hit