scientific article; zbMATH DE number 7626754
Publication:5053256
Authors: Yuanhan Hu, Xuefeng Gao, Mert Gürbüzbalaban, Lingjiong Zhu
Publication date: 6 December 2022
Full work available at URL: https://arxiv.org/abs/2007.00590
Title: Decentralized stochastic gradient Langevin dynamics and Hamiltonian Monte Carlo
Keywords: convergence rate; decentralized algorithms; Langevin dynamics; Wasserstein distance; stochastic gradient; Hamiltonian Monte Carlo; heavy-ball method; momentum acceleration; decentralized Bayesian inference
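
The keywords describe decentralized Langevin-type sampling. As a rough illustration only (a minimal sketch under assumed notation, not the authors' code), the Python below shows one synchronous round of a decentralized stochastic gradient Langevin update: each agent averages its neighbors' iterates through a doubly stochastic mixing matrix W, takes a stochastic gradient step on its local potential f_i, and injects Gaussian noise. All names here (de_sgld_step, grad_fns, eta) are illustrative assumptions.

    import numpy as np

    def de_sgld_step(X, W, grad_fns, eta, rng):
        """One synchronous decentralized SGLD round (illustrative sketch).

        X        : (n_agents, d) array of current iterates
        W        : (n_agents, n_agents) doubly stochastic mixing matrix
        grad_fns : grad_fns[i](x) returns a (stochastic) gradient of f_i at x
        eta      : step size
        """
        n, d = X.shape
        mixed = W @ X  # gossip/consensus averaging with neighbors
        grads = np.stack([grad_fns[i](X[i]) for i in range(n)])
        noise = rng.standard_normal((n, d))
        return mixed - eta * grads + np.sqrt(2.0 * eta) * noise

    # Toy usage: agent i only sees the local quadratic potential
    # f_i(x) = ||x - mu_i||^2 / 2, so its gradient is x - mu_i.
    rng = np.random.default_rng(0)
    n, d = 4, 2
    mus = rng.standard_normal((n, d))
    grad_fns = [lambda x, m=mus[i]: x - m for i in range(n)]
    W = np.full((n, n), 1.0 / n)  # fully connected, uniform mixing
    X = np.zeros((n, d))
    for _ in range(1000):
        X = de_sgld_step(X, W, grad_fns, eta=0.05, rng=rng)
    # For small eta, each row of X is approximately (up to O(eta) bias)
    # a sample from the target proportional to exp(-sum_i f_i), here a
    # Gaussian centered at the average of the mu_i.

The averaging step drives consensus across agents while the gradient-plus-noise step drives sampling; convergence of such schemes is typically measured in Wasserstein distance, as the keyword list indicates.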
Related Items (3)
- The divide-and-conquer sequential Monte Carlo algorithm: theoretical properties and limit theorems
- Decentralized Bayesian learning with Metropolis-adjusted Hamiltonian Monte Carlo
- Distributed event-triggered unadjusted Langevin algorithm for Bayesian learning
Cites Work
- First-order methods of smooth convex optimization with inexact oracle
- A class of Wasserstein metrics for probability distributions
- Introductory lectures on convex optimization. A basic course.
- Sampling from a log-concave distribution with projected Langevin Monte Carlo
- When distributed computation is communication expensive
- Deep learning: a Bayesian perspective
- Comparing consensus Monte Carlo strategies for distributed Bayesian computation
- Is there an analog of Nesterov acceleration for gradient-based MCMC?
- Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
- On sampling from a log-concave density using kinetic Langevin diffusions
- On stochastic gradient Langevin dynamics with dependent data streams in the logconcave case
- User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient
- High-dimensional Bayesian inference via the unadjusted Langevin algorithm
- Couplings and quantitative contraction rates for Langevin dynamics
- Nonasymptotic convergence analysis for the unadjusted Langevin algorithm
- Online Model Selection Based on the Variational Bayes
- The computation of averages from equilibrium and nonequilibrium Langevin molecular dynamics
- On the Convergence of Decentralized Gradient Descent
- A Differential Equation for Modeling Nesterov's Accelerated Gradient Method: Theory and Insights
- On Tests for Global Maximum of the Log-Likelihood Function
- A First Course in Bayesian Statistical Methods
- Linear Time Average Consensus and Distributed Optimization on Fixed Graphs
- Distributed Subgradient Methods for Multi-Agent Optimization
- Robustness of Accelerated First-Order Algorithms for Strongly Convex Optimization Problems
- Distributed Gradient Flow: Nonsmoothness, Nonconvexity, and Saddle Point Evasion
- Global Convergence of Stochastic Gradient Hamiltonian Monte Carlo for Nonconvex Stochastic Optimization: Nonasymptotic Performance Bounds and Momentum-Based Acceleration
- Global Consensus Monte Carlo
- Stochastic Gradient-Based Distributed Bayesian Estimation in Cooperative Sensor Networks
- Distributed Heavy-Ball: A Generalization and Acceleration of First-Order Methods With Gradient Tracking
- On Stochastic Gradient Langevin Dynamics with Dependent Data Streams: The Fully Nonconvex Case
- Stochastic Processes and Applications
- Optimal Algorithms for Non-Smooth Distributed Optimization in Networks
- Theoretical Guarantees for Approximate Sampling from Smooth and Log-Concave Densities
- Robust Accelerated Gradient Methods for Smooth Strongly Convex Functions
- Optimal Transport