Online distributed nonconvex optimization with stochastic objective functions: high probability bound analysis of dynamic regrets
Publication: Q6632505
DOI: 10.1016/j.automatica.2024.111863
MaRDI QID: Q6632505
Yulong Wang, Hang Xu, Kaihong Lu
Publication date: 4 November 2024
Published in: Automatica
Cites Work
- Initialization-free distributed algorithms for optimal resource allocation with feasibility constraints and application to economic dispatch of power systems
- Distributed strategies for generating weight-balanced and doubly stochastic digraphs
- Distributed average consensus with least-mean-square deviation
- Comparing two versions of Markov's inequality on compact sets
- Network-based modelling and dynamic output feedback control for unmanned marine vehicles in network environments
- Distributed decision-coupled constrained optimization via proximal-tracking
- Distributed online bandit optimization under random quantization
- Distributed algorithm design for constrained resource allocation problems with high-order multi-agent systems
- Non-stationary stochastic optimization
- Online Distributed Convex Optimization on Dynamic Networks
- Robust Stochastic Approximation Approach to Stochastic Programming
- Distributed Online Convex Optimization on Time-Varying Directed Graphs
- Distributed Online Optimization in Dynamic Environments Using Mirror Descent
- Diffusion Adaptation Strategies for Distributed Optimization and Learning Over Networks
- A Saddle Point Algorithm for Networked Online Convex Optimization
- Local Prediction for Enhanced Convergence of Distributed Optimization Algorithms
- Distributed Online Optimization for Multi-Agent Networks With Coupled Inequality Constraints
- Distributed Subgradient Methods for Multi-Agent Optimization
- Online Distributed Optimization With Nonconvex Objective Functions: Sublinearity of First-Order Optimality Condition-Based Regret
- Dynamic Online Learning via Frank-Wolfe Algorithm
- Online Distributed Optimization With Strongly Pseudoconvex-Sum Cost Functions
- Distributed Variable Sample-Size Stochastic Optimization With Fixed Step-Sizes
- Distributed strategies for mixed equilibrium problems: continuous-time theoretical approaches
- Online distributed optimization with strongly pseudoconvex-sum cost functions and coupled inequality constraints
- Online Distributed Optimization With Nonconvex Objective Functions via Dynamic Regrets
- Innovation Compression for Communication-Efficient Distributed Optimization With Linear Convergence
- Convergence in high probability of distributed stochastic gradient descent algorithms
- Robust distributed optimization with randomly corrupted gradients
- DAdam: a consensus-based distributed adaptive gradient method for online optimization