Fast Convergence Rates for Distributed Non-Bayesian Learning
DOI: 10.1109/TAC.2017.2690401
zbMath: 1458.62116
arXiv: 1508.05161
OpenAlex: W2963118811
MaRDI QID: Q4566974
Angelia Nedić, Alex Olshevsky, César A. Uribe
Publication date: 27 June 2018
Published in: IEEE Transactions on Automatic Control
Full work available at URL: https://arxiv.org/abs/1508.05161
Mathematics Subject Classification:
Estimation in multivariate analysis (62H12)
Bayesian inference (62F15)
Stochastic learning and adaptive control (93E35)
Distributed algorithms (68W15)
Related Items (14)
Personalized optimization with user's feedback
A stochastic averaging gradient algorithm with multi-step communication for distributed optimization
Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs
Graph-theoretic approaches for analyzing the resilience of distributed control systems: a tutorial and survey
Differentially private distributed online learning over time-varying digraphs via dual averaging
Graph Topology Invariant Gradient and Sampling Complexity for Decentralized and Stochastic Optimization
Min-max optimization over slowly time-varying graphs
Decentralized optimization over slowly time-varying graphs: algorithms and lower bounds
Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
Distributed Bayesian filtering using logarithmic opinion pool for dynamic sensor networks
Towards accelerated rates for distributed optimization over time-varying networks
Distributed consensus-based multi-agent convex optimization via gradient tracking technique
Distributed stochastic gradient tracking methods
A dual approach for optimal algorithms in distributed optimization over networks