The EM Algorithm gives Sample-Optimality for Learning Mixtures of Well-Separated Gaussians
Publication: 6333957
arXiv: 2002.00329
MaRDI QID: Q6333957
Author name not available
Publication date: 2 February 2020
Abstract: We consider the problem of spherical Gaussian Mixture models with $k \geq 3$ components when the components are well separated. A fundamental previous result established that separation of $\Omega(\sqrt{\log k})$ is necessary and sufficient for identifiability of the parameters with polynomial sample complexity (Regev and Vijayaraghavan, 2017). In the same context, we show that $\tilde{O}(kd/\epsilon^2)$ samples suffice for any $\epsilon \lesssim 1/k$, closing the gap from polynomial to linear, and thus giving the first sample-optimal upper bound for the parameter estimation of well-separated Gaussian mixtures. We accomplish this by proving a new result for the Expectation-Maximization (EM) algorithm: we show that EM converges locally under separation $\Omega(\sqrt{\log k})$. The previous best-known guarantee required $\Omega(\sqrt{k})$ separation (Yan et al., 2017). Unlike prior work, our results do not assume or use prior knowledge of the (potentially different) mixing weights or variances of the Gaussian components. Furthermore, our results show that the finite-sample error of EM does not depend on non-universal quantities such as pairwise distances between the means of the Gaussian components.
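For readers who want to see the mechanics the abstract refers to, below is a minimal NumPy sketch of EM for a spherical Gaussian mixture: the E-step computes posterior responsibilities, and the M-step re-estimates mixing weights, means, and per-component spherical variances. This is the generic textbook EM update, not the paper's analysis or its initialization scheme; the function name `em_spherical_gmm` and all parameter choices (initialization at random data points, tolerance, iteration cap) are illustrative assumptions.

```python
# Generic EM for a spherical Gaussian mixture N(mu_j, sigma_j^2 I).
# Illustrative sketch only; not the initialization or analysis from the paper.
import numpy as np

def em_spherical_gmm(X, k, n_iters=100, tol=1e-6, seed=0):
    """Fit a k-component spherical GMM to the rows of X via EM."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Initialize means at k distinct data points; uniform weights; unit variances.
    mus = X[rng.choice(n, size=k, replace=False)]
    sigma2 = np.ones(k)
    pis = np.full(k, 1.0 / k)
    prev_ll = -np.inf
    for _ in range(n_iters):
        # E-step: log joint densities log(pi_j * N(x_i; mu_j, sigma_j^2 I)), shape (n, k).
        sq_dists = ((X[:, None, :] - mus[None, :, :]) ** 2).sum(axis=2)
        log_probs = (np.log(pis) - 0.5 * d * np.log(2 * np.pi * sigma2)
                     - sq_dists / (2 * sigma2))
        log_norm = np.logaddexp.reduce(log_probs, axis=1, keepdims=True)
        resp = np.exp(log_probs - log_norm)  # posterior responsibilities, rows sum to 1
        # M-step: weighted re-estimates of weights, means, and spherical variances.
        nk = resp.sum(axis=0) + 1e-12  # small floor avoids division by an empty component
        pis = nk / n
        mus = (resp.T @ X) / nk[:, None]
        sq_dists = ((X[:, None, :] - mus[None, :, :]) ** 2).sum(axis=2)
        sigma2 = (resp * sq_dists).sum(axis=0) / (d * nk)
        # Log-likelihood at the parameters used in this E-step; stop on small improvement.
        ll = log_norm.sum()
        if ll - prev_ll < tol:
            break
        prev_ll = ll
    return pis, mus, sigma2

# Usage example: two well-separated spherical components in 2D.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(-4.0, 1.0, size=(200, 2)),
                   rng.normal(+4.0, 1.0, size=(200, 2))])
    pis, mus, sigma2 = em_spherical_gmm(X, k=2)
    print(pis, mus, sigma2)
```

Note that, consistent with the setting of the abstract, this sketch estimates (potentially different) mixing weights and variances rather than assuming they are known.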