Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent
From MaRDI portal
Publication: 4969072
zbMath: 1498.68261
arXiv: 1809.06958
MaRDI QID: Q4969072
Dominic Richards, Patrick Rebeschini
Publication date: 5 October 2020
Full work available at URL: https://arxiv.org/abs/1809.06958
algorithmic stability, distributed machine learning, generalisation bounds, implicit regularisation, multi-agent optimisation
Learning and adaptive systems in artificial intelligence (68T05); Stochastic programming (90C15); Distributed algorithms (68W15)
Related Items (2)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model
- From inexact optimization to learning via gradient concentration
Uses Software
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Nonparametric stochastic approximation with large step-sizes
- An optimal method for stochastic composite optimization
- Distributed stochastic subgradient projection algorithms for convex optimization
- Decentralized estimation and control of graph connectivity for mobile sensor networks
- Online gradient descent learning algorithms
- A finite sample distribution-free performance bound for local discrimination rules
- Introductory lectures on convex optimization. A basic course.
- Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization
- DSA: Decentralized Double Stochastic Averaging Gradient Algorithm
- Iterative Regularization for Learning with Convex Loss Functions
- Online Learning as Stochastic Approximation of Regularization Paths: Optimality and Almost-Sure Convergence
- A Randomized Incremental Subgradient Method for Distributed Optimization in Networked Systems
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- Data-Dependent Convergence for Consensus Stochastic Optimization
- Optimal Rates for Multi-pass Stochastic Gradient Methods
- 10.1162/153244302760200704
- Distributed Subgradient Methods for Multi-Agent Optimization
- On Distributed Averaging Algorithms and Quantization Effects
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- Distributed Subgradient Methods for Convex Optimization Over Random Networks
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- Optimal Distributed Online Prediction using Mini-Batches