One-point gradient-free methods for smooth and non-smooth saddle-point problems
From MaRDI portal
Publication:2117626
DOI: 10.1007/978-3-030-77876-7_10
zbMath: 1487.90612
arXiv: 2103.00321
OpenAlex: W3176874790
MaRDI QID: Q2117626
Aleksandr Beznosikov, Vasilii Novitskii, Alexander V. Gasnikov
Publication date: 22 March 2022
Full work available at URL: https://arxiv.org/abs/2103.00321
Stochastic programming (90C15)
Complementarity and equilibrium problems and variational inequalities (finite dimensions) (aspects of mathematical programming) (90C33)
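To illustrate the paper's topic, here is a minimal sketch of the classical one-point (single function evaluation per iteration) zeroth-order gradient estimator that such methods build on. The function name, the choice of a spherical perturbation, and the demo target function are illustrative assumptions, not the authors' specific scheme:

```python
import numpy as np

def one_point_grad(f, z, tau, rng):
    # One-point zeroth-order gradient estimate (illustrative sketch):
    #   g = (d / tau) * f(z + tau * e) * e,
    # where e is drawn uniformly from the unit sphere in R^d and tau > 0
    # is a smoothing radius. Its expectation equals the gradient of a
    # smoothed version of f, so averaging many draws approximates
    # grad f(z) for small tau -- using only one function value per draw.
    d = z.size
    e = rng.standard_normal(d)
    e /= np.linalg.norm(e)          # uniform direction on the sphere
    return (d / tau) * f(z + tau * e) * e

# Demo: for the linear function f(z) = c @ z, averaging the estimator
# at z = 0 recovers the true gradient c.
rng = np.random.default_rng(1)
c = np.array([1.0, -2.0, 0.5])
f = lambda z: c @ z
est = np.mean(
    [one_point_grad(f, np.zeros(3), 0.1, rng) for _ in range(50_000)],
    axis=0,
)
```

In a saddle-point setting, the same estimator would be queried on the joint variable (x, y), with a descent step in x and an ascent step in y; the point of one-point feedback is that only a single (possibly noisy) function value is observed per iteration, rather than the two evaluations used by two-point schemes.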
Cites Work
- Stochastic online optimization. Single-point and multi-point non-linear multi-armed bandits. Convex and strongly-convex case
- Random gradient-free minimization of convex functions
- Lectures on Modern Convex Optimization
- Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
- Gradient-Free Methods with Inexact Oracle for Convex-Concave Stochastic Saddle-Point Problem
- An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback