Minimising the Expected Posterior Entropy Yields Optimal Summary Statistics

Publication:6401209

arXiv: 2206.02340 | MaRDI QID: Q6401209

Author name not available.

Publication date: 5 June 2022

Abstract: Extracting low-dimensional summary statistics from large datasets is essential for efficient (likelihood-free) inference. We propose obtaining summary statistics by minimizing the expected posterior entropy (EPE) under the prior predictive distribution of the model. We show that minimizing the EPE is equivalent to learning a conditional density estimator for the posterior and to other information-theoretic approaches. Further summary extraction methods (including minimizing the L2 Bayes risk, maximizing the Fisher information, and model selection approaches) are special or limiting cases of EPE minimization. We demonstrate that the approach yields high-fidelity summary statistics by applying it to both a synthetic benchmark and a population genetics problem. We not only offer concrete recommendations for practitioners but also provide a unifying perspective for obtaining informative summary statistics.
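For illustration, the objective can be read as a cross-entropy: drawing pairs (theta, x) from the joint distribution p(theta) p(x | theta) and minimizing -E[log q(theta | t(x))] over both a summary function t and a conditional density estimator q upper-bounds the expected posterior entropy, which is the equivalence to conditional density estimation stated in the abstract. The sketch below is a hypothetical illustration, not code from the paper or its companion repository; the toy simulator, network sizes, and dimensions are assumptions chosen only to make the training loop concrete.

```python
# Hypothetical sketch (not from the paper or repository): jointly train a
# summary network t and a Gaussian conditional density estimator q by
# minimizing -E[log q(theta | t(x))] over draws (theta, x) from the joint
# distribution p(theta) p(x | theta). This cross-entropy upper-bounds the
# expected posterior entropy and is tight when q matches p(theta | t(x)).
import torch
from torch import nn

DATA_DIM, SUMMARY_DIM = 50, 2  # illustrative sizes, not taken from the paper

def simulate(n):
    # Toy simulator: theta ~ Normal(0, 1), x_i | theta ~ Normal(theta, 1).
    theta = torch.randn(n, 1)
    x = theta + torch.randn(n, DATA_DIM)
    return theta, x

summary_net = nn.Sequential(nn.Linear(DATA_DIM, 64), nn.Tanh(), nn.Linear(64, SUMMARY_DIM))
density_net = nn.Sequential(nn.Linear(SUMMARY_DIM, 64), nn.Tanh(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(
    list(summary_net.parameters()) + list(density_net.parameters()), lr=1e-3
)

for step in range(2000):
    theta, x = simulate(256)
    t = summary_net(x)                                 # low-dimensional summaries t(x)
    mean, log_scale = density_net(t).chunk(2, dim=-1)  # Gaussian q(theta | t)
    q = torch.distributions.Normal(mean, log_scale.exp())
    loss = -q.log_prob(theta).mean()                   # Monte Carlo estimate of the EPE bound
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

After training, summary_net(x) would play the role of the low-dimensional summary statistics passed to a downstream likelihood-free inference method such as approximate Bayesian computation.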

Has companion code repository: https://github.com/tillahoffmann/summaries
