Variational image compression with a scale hyperprior

Publication:6297336

arXiv: 1802.01436
MaRDI QID: Q6297336

Author name not available

Publication date: 31 January 2018

Abstract: We describe an end-to-end trainable model for image compression based on variational autoencoders. The model incorporates a hyperprior to effectively capture spatial dependencies in the latent representation. This hyperprior relates to side information, a concept universal to virtually all modern image codecs, but largely unexplored in image compression using artificial neural networks (ANNs). Unlike existing autoencoder compression methods, our model trains a complex prior jointly with the underlying autoencoder. We demonstrate that this model leads to state-of-the-art image compression when measuring visual quality using the popular MS-SSIM index, and yields rate-distortion performance surpassing published ANN-based methods when evaluated using a more traditional metric based on squared error (PSNR). Furthermore, we provide a qualitative comparison of models trained for different distortion metrics.
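The abstract describes an autoencoder whose latents are modeled by a hyperprior: a second, smaller autoencoder produces side information that predicts the scales of the latent elements, and both transforms are trained jointly under a rate-distortion objective. The following is a minimal illustrative sketch of that idea, assuming PyTorch; the layer counts, kernel sizes, channel width `n`, and the additive-uniform-noise stand-in for quantization are simplifying assumptions for illustration, not the paper's exact configuration or the authors' code.

```python
# Illustrative sketch of a scale-hyperprior compression model (not the authors' implementation).
import torch
import torch.nn as nn


class ScaleHyperpriorSketch(nn.Module):
    def __init__(self, n=128):
        super().__init__()
        # Analysis/synthesis transforms: the "underlying autoencoder".
        self.encoder = nn.Sequential(
            nn.Conv2d(3, n, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(n, n, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(n, n, 5, stride=2, padding=2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(n, n, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(n, n, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(n, 3, 5, stride=2, padding=2, output_padding=1),
        )
        # Hyper-encoder/decoder: side information that predicts per-element scales
        # of the latents, capturing their spatial dependencies.
        self.hyper_encoder = nn.Sequential(
            nn.Conv2d(n, n, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(n, n, 3, stride=2, padding=1),
        )
        self.hyper_decoder = nn.Sequential(
            nn.ConvTranspose2d(n, n, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(n, n, 3, stride=2, padding=1, output_padding=1), nn.Softplus(),
        )

    @staticmethod
    def quantize(x):
        # Additive uniform noise as a differentiable training-time proxy for rounding.
        return x + torch.empty_like(x).uniform_(-0.5, 0.5)

    @staticmethod
    def gaussian_bits(y, scale):
        # Expected code length of y under a zero-mean Gaussian with the predicted scale,
        # integrated over unit-width quantization bins.
        dist = torch.distributions.Normal(0.0, scale.clamp(min=1e-6))
        p = dist.cdf(y + 0.5) - dist.cdf(y - 0.5)
        return -torch.log2(p.clamp(min=1e-9)).sum()

    def forward(self, x):
        y = self.encoder(x)            # latent representation
        z = self.hyper_encoder(y)      # side information
        z_hat = self.quantize(z)
        scales = self.hyper_decoder(z_hat)
        y_hat = self.quantize(y)
        x_hat = self.decoder(y_hat)
        # Rate: bits for the latents under the scale hyperprior, plus bits for the
        # side information (a unit-scale Gaussian stands in here for the learned prior).
        rate = self.gaussian_bits(y_hat, scales) + self.gaussian_bits(z_hat, torch.ones_like(z_hat))
        distortion = nn.functional.mse_loss(x_hat, x)
        return x_hat, rate, distortion
```

Training such a model would minimize a weighted rate-distortion objective, e.g. `rate / num_pixels + lambda * distortion`, where the (here unspecified) trade-off weight `lambda` selects the operating point; the distortion term can be squared error or a perceptual metric such as MS-SSIM, corresponding to the metric comparison discussed in the abstract.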




Has companion code repository: https://github.com/klieberman/ood_nic