Distribution Compression in Near-linear Time
Publication: 6383107
arXiv: 2111.07941
MaRDI QID: Q6383107
Author name not available
Publication date: 15 November 2021
Abstract: In distribution compression, one aims to accurately summarize a probability distribution $\mathbb{P}$ using a small number of representative points. Near-optimal thinning procedures achieve this goal by sampling $n$ points from a Markov chain and identifying $\sqrt{n}$ points with $\widetilde{\mathcal{O}}(n^{-1/2})$ discrepancy to $\mathbb{P}$. Unfortunately, these algorithms suffer from quadratic or super-quadratic runtime in the sample size $n$. To address this deficiency, we introduce Compress++, a simple meta-procedure for speeding up any thinning algorithm while suffering at most a factor of $4$ in error. When combined with the quadratic-time kernel halving and kernel thinning algorithms of Dwivedi and Mackey (2021), Compress++ delivers $\sqrt{n}$ points with $\mathcal{O}(\sqrt{\log n / n})$ integration error and better-than-Monte-Carlo maximum mean discrepancy in $\mathcal{O}(n \log^3 n)$ time and $\mathcal{O}(\sqrt{n} \log^2 n)$ space. Moreover, Compress++ enjoys the same near-linear runtime given any quadratic-time input and reduces the runtime of super-quadratic algorithms by a square-root factor. In our benchmarks with high-dimensional Monte Carlo samples and Markov chains targeting challenging differential equation posteriors, Compress++ matches or nearly matches the accuracy of its input algorithm in orders of magnitude less time.
Has companion code repository: https://github.com/microsoft/goodpoints
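The abstract describes Compress++ only at a high level, so the following is a minimal sketch of the divide, halve, and thin recursion behind such a meta-procedure. All names here (halve, thin, compress, compresspp) are chosen for illustration, and the halving and thinning steps are uniform-random stand-ins rather than the kernel halving and kernel thinning algorithms of Dwivedi and Mackey (2021) used in the paper; the authors' actual implementation lives in the goodpoints repository linked above.

import numpy as np


def halve(points, rng):
    """Stand-in halving: keep a uniformly random half of the input points."""
    keep = rng.choice(len(points), size=len(points) // 2, replace=False)
    return points[np.sort(keep)]


def thin(points, out_size, rng):
    """Stand-in thinning: keep out_size uniformly random points."""
    keep = rng.choice(len(points), size=out_size, replace=False)
    return points[np.sort(keep)]


def compress(points, g, rng):
    """Recursively reduce n = len(points) points to about 2**g * sqrt(n) points."""
    n = len(points)
    if n <= 4 ** g:  # small inputs are returned unchanged
        return points
    quarter = n // 4
    # Compress each quarter to about 2**g * sqrt(n) / 2 points ...
    buckets = [
        compress(points[i * quarter : (i + 1) * quarter], g, rng) for i in range(4)
    ]
    combined = np.concatenate(buckets)  # ... giving about 2**(g+1) * sqrt(n) points,
    return halve(combined, rng)         # then halve back down to about 2**g * sqrt(n).


def compresspp(points, g, rng):
    """Compress++ skeleton: run compress, then thin the survivors to sqrt(n) points."""
    coreset = compress(points, g, rng)
    return thin(coreset, int(np.sqrt(len(points))), rng)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal((4 ** 7, 2))  # n = 16384 sample points in 2 dimensions
    summary = compresspp(x, g=2, rng=rng)
    print(summary.shape)  # (128, 2): a sqrt(n)-point summary

Because every halving call in the recursion operates on a small bucket rather than the full sample, swapping the stand-in routines for kernel halving and kernel thinning is what yields the near-linear runtime and small space usage quoted in the abstract.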
This page was built for publication: Distribution Compression in Near-linear Time