A new one-point residual-feedback oracle for black-box learning and control
Publication: Q2063773 (MaRDI QID)
DOI: 10.1016/j.automatica.2021.110006
zbMath: 1480.93149
arXiv: 2006.10820
OpenAlex: W3217389438
Publication date: 3 January 2022
Published in: Automatica
Full work available at URL: https://arxiv.org/abs/2006.10820
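For orientation, the sketch below illustrates the kind of one-point residual-feedback gradient estimator the title refers to: each iteration makes a single new query to the black-box objective and reuses the perturbed evaluation from the previous iteration as the baseline of a finite-difference-style gradient estimate. This is a minimal illustrative sketch, not the authors' implementation; the unit-sphere sampling, the toy quadratic objective, and the constants (smoothing radius delta, step size eta, iteration count) are assumptions chosen for demonstration. See the arXiv preprint above for the precise oracle and its guarantees.

```python
import numpy as np

def residual_feedback_step(f, x, f_prev, delta, rng):
    """One-point residual-feedback gradient estimate (illustrative sketch).

    A single new query f(x + delta*u) is made per step; the perturbed
    evaluation from the previous iteration (f_prev) is reused as the baseline.
    """
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                          # random unit direction
    f_curr = f(x + delta * u)                       # the only new function query
    grad_est = (d / delta) * (f_curr - f_prev) * u  # residual-feedback estimate
    return grad_est, f_curr

if __name__ == "__main__":
    # Toy quadratic objective; all parameter values here are assumptions.
    rng = np.random.default_rng(0)
    f = lambda z: float(np.dot(z, z))
    x = rng.standard_normal(5)
    delta, eta = 1e-2, 5e-3
    f_prev = f(x)                                   # initialize the reused evaluation
    for _ in range(3000):
        g, f_prev = residual_feedback_step(f, x, f_prev, delta, rng)
        x = x - eta * g                             # zeroth-order descent step
    print("final objective value:", f(x))
```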
Related Items (3)
- Zeroth-order algorithms for stochastic distributed nonconvex optimization
- No-regret learning for repeated non-cooperative games with lossy bandits
- Privacy preserving distributed online projected residual feedback optimization over unbalanced directed graphs
Cites Work
- Extremum seeking control: convergence analysis
- Stochastic online optimization. Single-point and multi-point non-linear multi-armed bandits. Convex and strongly-convex case
- Random gradient-free minimization of convex functions
- Robust hybrid zero-order optimization algorithms with acceleration via averaging in time
- Surrogate-based distributed optimisation for expensive black-box functions
- Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
- Derivative-free optimization methods
- An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Unnamed Item