Randomly initialized EM algorithm for two-component Gaussian mixture achieves near optimality in \(O(\sqrt{n})\) iterations
DOI: 10.4171/MSL/29
zbMath: 1493.62350
arXiv: 1908.10935
MaRDI QID: Q2113265
Publication date: 11 March 2022
Published in: Mathematical Statistics and Learning
Full work available at URL: https://arxiv.org/abs/1908.10935
Classification:
- Hypothesis testing in multivariate analysis (62H15)
- Minimax procedures in statistical decision theory (62C20)
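The paper concerns the EM iteration for the symmetric two-component Gaussian mixture \(\tfrac{1}{2}N(\theta^*, I_d) + \tfrac{1}{2}N(-\theta^*, I_d)\), where with equal weights and identity covariance the update reduces to \(\theta_{t+1} = \frac{1}{n}\sum_{i=1}^n \tanh(\langle\theta_t, x_i\rangle)\, x_i\). Below is a minimal Python sketch of that iteration with random initialization; the function name, the unit-sphere initialization, and the \(\lceil\sqrt{n}\rceil\) stopping rule are illustrative assumptions echoing the title's guarantee, not details taken from the paper.

```python
import numpy as np

def em_symmetric_gmm(X, n_iters=None, rng=None):
    """Randomly initialized EM for the symmetric two-component mixture
    (1/2) N(theta, I_d) + (1/2) N(-theta, I_d)."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    if n_iters is None:
        # Illustrative stopping rule mirroring the title's O(sqrt(n)) bound.
        n_iters = int(np.ceil(np.sqrt(n)))
    # Random initialization on the unit sphere (one plausible choice).
    theta = rng.standard_normal(d)
    theta /= np.linalg.norm(theta)
    for _ in range(n_iters):
        # E-step: the posterior weight of the +theta component is
        # sigmoid(2 <theta, x>), so 2 * posterior - 1 = tanh(<theta, x>).
        w = np.tanh(X @ theta)
        # M-step: theta <- (1/n) sum_i tanh(<theta, x_i>) x_i.
        theta = (X * w[:, None]).mean(axis=0)
    return theta

# Illustrative usage: n = 2000 samples in d = 2 around +/- theta_star.
rng = np.random.default_rng(0)
theta_star = np.array([1.0, -0.5])
signs = rng.choice([-1.0, 1.0], size=2000)
X = signs[:, None] * theta_star[None, :] + rng.standard_normal((2000, 2))
theta_hat = em_symmetric_gmm(X)
# theta_hat estimates theta_star up to a global sign flip.
```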
Related Items (5)
- Iterative algorithm for discrete structure recovery
- Optimal estimation of high-dimensional Gaussian location mixtures
- Sharp global convergence guarantees for iterative nonconvex optimization with random data
- Optimal estimation and computational limit of low-rank Gaussian mixtures
- Sharp optimal recovery in the two component Gaussian mixture model
Cites Work
- On the rate of convergence in Wasserstein distance of the empirical measure
- Convergence rates of parameter estimation for some weakly identifiable finite mixtures
- Statistical guarantees for the EM algorithm: from population to sample-based analysis
- Choosing starting values for the EM algorithm for getting the highest likelihood in multivariate Gaussian mixture models
- Choosing initial values for the EM algorithm for finite mixtures
- The transportation cost from the uniform measure to the empirical measure in dimension \(\geq 3\)
- Information-theoretic determination of minimax rates of convergence
- Rates of convergence for the Gaussian mixture sieve
- Adaptive estimation of a quadratic functional by model selection
- Entropies and rates of convergence for maximum likelihood and Bayes estimation for mixtures of normal densities
- The landscape of empirical risk for nonconvex losses
- Singularity, misspecification and the convergence rate of EM
- On the nonparametric maximum likelihood estimator for Gaussian location mixture densities with application to Gaussian denoising
- Optimal estimation of Gaussian mixtures via denoised method of moments
- Gradient descent with random initialization: fast global convergence for nonconvex phase retrieval
- Sparse PCA: optimal rates and adaptive estimation
- Dissipation of Information in Channels With Input Constraints
- Mixture Densities, Maximum Likelihood and the EM Algorithm
- Asymptotic Statistics
- Trust Region Methods
- Functional Properties of Minimum Mean-Square Error and Mutual Information