GENs: generative encoding networks

From MaRDI portal
Publication:6097093

DOI: 10.1007/S10994-022-06220-W
arXiv: 2010.15283
OpenAlex: W3096515130
MaRDI QID: Q6097093

Ross T. Whitaker, Shireen Elhabian, Surojit Saha

Publication date: 12 June 2023

Published in: Machine Learning

Abstract: Mapping data from and/or onto a known family of distributions has become an important topic in machine learning and data analysis. Deep generative models (e.g., generative adversarial networks) have been used effectively to match known and unknown distributions. Nonetheless, when the form of the target distribution is known, analytical methods are advantageous in providing robust results with provable properties. In this paper, we propose and analyze the use of nonparametric density methods to estimate the Jensen-Shannon divergence for matching unknown data distributions to known target distributions, such as Gaussians or mixtures of Gaussians, in latent spaces. This analytical method has several advantages: better behavior when the training sample quantity is low, provable convergence properties, and relatively few parameters, which can be derived analytically. Using the proposed method, we enforce the latent representation of an autoencoder to match a target distribution in a learning framework that we call a \emph{generative encoding network}. Here, we present the numerical methods; derive the expected distribution of the data in the latent space; evaluate the properties of the latent space, sample reconstruction, and generated samples; show the advantages over the adversarial counterpart; and demonstrate the application of the method in the real world.
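The core idea in the abstract — using nonparametric density estimation to compute a Jensen-Shannon divergence between latent codes and a known target distribution — can be illustrated with a minimal Monte Carlo sketch. This is a hypothetical one-dimensional example built on kernel density estimation (`scipy.stats.gaussian_kde`), not the paper's actual estimator; the function and variable names are assumptions for illustration.

```python
# Sketch: Monte Carlo estimate of the Jensen-Shannon divergence between an
# empirical sample (e.g. autoencoder latent codes) and samples from a known
# standard Gaussian target, using kernel density estimation (KDE).
# Illustrative only; not the estimator derived in the paper.
import numpy as np
from scipy.stats import gaussian_kde

def js_divergence(latent_samples, target_samples):
    """Estimate JS(P || Q) in nats from samples of P and Q."""
    p = gaussian_kde(latent_samples)       # nonparametric density of P
    q = gaussian_kde(target_samples)       # nonparametric density of Q
    m = lambda x: 0.5 * (p(x) + q(x))      # mixture M = (P + Q) / 2
    eps = 1e-12                            # guard against log(0)
    kl_pm = np.mean(np.log((p(latent_samples) + eps) / (m(latent_samples) + eps)))
    kl_qm = np.mean(np.log((q(target_samples) + eps) / (m(target_samples) + eps)))
    return 0.5 * kl_pm + 0.5 * kl_qm       # JS is the symmetrized average

rng = np.random.default_rng(0)
target = rng.standard_normal(2000)         # samples from the known N(0, 1) target
matched = rng.standard_normal(2000)        # latent codes already matching the target
shifted = rng.standard_normal(2000) + 3.0  # latent codes far from the target

print(js_divergence(matched, target))      # near 0
print(js_divergence(shifted, target))      # substantially larger
```

Driving such an estimate toward zero during training is what "enforcing the latent representation to match a target distribution" amounts to; the JS divergence is bounded by log 2, so a near-zero value indicates a close match.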


Full work available at URL: https://arxiv.org/abs/2010.15283





Cites Work


Related Items (4)






This page was built for publication: GENs: generative encoding networks
