Bounds for the tracking error of first-order online optimization methods
DOI: 10.1007/s10957-021-01836-9
zbMath: 1470.90082
arXiv: 2003.02400
OpenAlex: W3136112531
MaRDI QID: Q2032000
Emiliano Dall'Anese, Liam Madden, Stephen R. Becker
Publication date: 15 June 2021
Published in: Journal of Optimization Theory and Applications
Full work available at URL: https://arxiv.org/abs/2003.02400
Keywords: Tikhonov regularization; online optimization; smooth convex optimization; convergence bound; Nesterov acceleration
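As a rough illustration of the setting the title and keywords describe (this is a hypothetical sketch, not the paper's algorithm or bounds): an online first-order method takes one gradient step per round while the minimizer of a smooth, strongly convex cost drifts, and the quantity of interest is the tracking error, i.e. the distance between the iterate and the current minimizer. All function choices and constants below are illustrative assumptions.

```python
# Illustrative sketch of tracking error for online gradient descent on a
# time-varying quadratic f_t(x) = (x - c_t)^2, whose minimizer c_t drifts
# by a fixed amount `drift` per round. Not taken from the paper; for
# intuition only.

def tracking_errors(steps=50, eta=0.25, drift=0.1):
    x, c = 0.0, 1.0          # initial iterate and initial minimizer
    errors = []
    for _ in range(steps):
        grad = 2.0 * (x - c)  # gradient of f_t at the current iterate
        x -= eta * grad       # one online gradient-descent step
        c += drift            # the minimizer moves before the next round
        errors.append(abs(x - c))  # tracking error at the new round
    return errors

errs = tracking_errors()
```

With these constants the per-round contraction factor is 1 - 2*eta = 0.5, so the error settles near drift / (2*eta) = 0.2 rather than vanishing: the method tracks the moving optimum up to a drift-dependent floor, which is the kind of asymptotic tracking bound the paper's title refers to.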
Cites Work
- Optimized first-order methods for smooth convex minimization
- First-order methods of smooth convex optimization with inexact oracle
- Smooth strongly convex interpolation and exact worst-case performance of first-order methods
- Approximation accuracy, gradient methods, and error bound for structured convex optimization
- Lectures on convex optimization
- Introductory lectures on convex optimization. A basic course.
- Performance of first-order methods for smooth convex minimization: a novel approach
- Adaptive restart for accelerated gradient schemes
- Gradient methods for nonstationary unconstrained optimization problems
- Non-Stationary Stochastic Optimization
- On Lower and Upper Bounds for Smooth and Strongly Convex Optimization Problems
- Accelerated and Inexact Forward-Backward Algorithms
- Multiuser Optimization: Distributed Algorithms and Error Analysis
- Online Learning and Online Convex Optimization
- Stability of Over-Relaxations for the Forward-Backward Algorithm, Application to FISTA
- Analysis and Design of Optimization Algorithms via Integral Quadratic Constraints
- Augmented Lagrangians and Applications of the Proximal Point Algorithm in Convex Programming
- Gradient Convergence in Gradient methods with Errors
- Convergence Analysis of Saddle Point Problems in Time Varying Wireless Systems— Control Theoretical Approach
- Distributed Maximum Likelihood Sensor Network Localization
- First-Order Methods in Optimization
- Prediction-Correction Algorithms for Time-Varying Constrained Optimization
- Online Learning With Inexact Proximal Online Gradient Descent Algorithms
- Online Primal-Dual Methods With Measurement Feedback for Time-Varying Convex Optimization
- Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent
- Logarithmic Regret Algorithms for Online Convex Optimization
- Understanding Machine Learning
- Some methods of speeding up the convergence of iteration methods
- Robust Accelerated Gradient Methods for Smooth Strongly Convex Functions
- Convex analysis and monotone operator theory in Hilbert spaces
This page was built for publication: Bounds for the tracking error of first-order online optimization methods