Accelerated zero-order SGD method for solving the black box optimization problem under "overparametrization" condition
Publication: 6588732
DOI: 10.1007/978-3-031-47859-8_6
MaRDI QID: Q6588732
Authors: Aleksandr Lobanov, A. V. Gasnikov
Publication date: 16 August 2024
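
The record itself contains no algorithmic details, so the following is only a minimal, illustrative sketch of the general idea named in the title: estimating gradients of a black-box objective from two function evaluations (as in the cited two-point zero-order works) and feeding them into a momentum-accelerated SGD loop. Function names, step sizes, and the smoothing parameter are assumptions for illustration and do not reproduce the paper's actual method.

```python
import numpy as np

def two_point_grad_estimate(f, x, tau, rng):
    """Estimate grad f(x) from two black-box evaluations along a random direction."""
    d = x.size
    e = rng.standard_normal(d)
    e /= np.linalg.norm(e)                      # random unit direction
    return d * (f(x + tau * e) - f(x - tau * e)) / (2 * tau) * e

def accelerated_zo_sgd(f, x0, steps=1000, lr=1e-2, momentum=0.9,
                       tau=1e-4, seed=0):
    """Momentum (heavy-ball) SGD driven only by zero-order evaluations of f."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    v = np.zeros_like(x)
    for _ in range(steps):
        g = two_point_grad_estimate(f, x, tau, rng)   # noisy gradient surrogate
        v = momentum * v - lr * g                     # momentum update
        x = x + v
    return x

if __name__ == "__main__":
    # Toy black-box objective: a smooth quadratic we can only evaluate, not differentiate.
    f = lambda x: np.sum((x - 1.0) ** 2)
    print(accelerated_zo_sgd(f, x0=np.zeros(5)))      # should approach the all-ones vector
```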
Cites Work
- An optimal method for stochastic composite optimization
- Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
- Noisy zeroth-order optimization for non-smooth saddle point problems
- Zeroth-order methods for noisy Hölder-gradient functions
- An optimal algorithm for stochastic strongly-convex optimization
- Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
- Introduction to Derivative-Free Optimization
- Derivative-Free and Blackbox Optimization
- Optimization Methods for Large-Scale Machine Learning
- Black Box Optimization, Machine Learning, and No-Free Lunch Theorems
- Optimal Algorithms for Non-Smooth Distributed Optimization in Networks
- Reconciling modern machine-learning practice and the classical bias–variance trade-off
- An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback
- Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems
- A Stochastic Approximation Method
- Gradient-free federated learning methods with \(l_1\) and \(l_2\)-randomization for non-smooth convex stochastic optimization problems
- Gradient-free methods for non-smooth convex stochastic optimization with heavy-tailed noise on convex compact
- Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs
- Neural tangent kernel: convergence and generalization in neural networks (invited paper)
Related Items (1)