Decentralized learning over a network with Nyström approximation using SGD
Publication: 6117024
DOI: 10.1016/j.acha.2023.06.005
OpenAlex: W4380885337
MaRDI QID: Q6117024
Publication date: 19 July 2023
Published in: Applied and Computational Harmonic Analysis
Full work available at URL: https://doi.org/10.1016/j.acha.2023.06.005
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Fast learning rate of multiple kernel learning: trade-off between sparsity and smoothness
- Sparsity in multiple kernel learning
- Fast global convergence of gradient methods for high-dimensional statistical recovery
- Distributed kernel-based gradient descent algorithms
- Randomized sketches for kernels: fast and optimal nonparametric regression
- Optimal rates for the regularized least-squares algorithm
- DSA: Decentralized Double Stochastic Averaging Gradient Algorithm
- Online Distributed Learning Over Networks in RKH Spaces Using Random Fourier Features
- On Nonconvex Decentralized Gradient Descent
- Optimal Rates for Multi-pass Stochastic Gradient Methods
- Theory of Reproducing Kernels
- A Fast Randomized Incremental Gradient Method for Decentralized Nonconvex Optimization