Analysis and Design of Optimization Algorithms via Integral Quadratic Constraints
Publication: 3465237
DOI: 10.1137/15M1009597
zbMath: 1329.90103
arXiv: 1408.3595
MaRDI QID: Q3465237
Laurent Lessard, Benjamin Recht, Andrew K. Packard
Publication date: 21 January 2016
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1408.3595
Keywords: convex optimization; semidefinite programming; control theory; integral quadratic constraints; Nesterov's method; first-order methods; proximal gradient methods; heavy-ball method
MSC classification: Semidefinite programming (90C22); Convex programming (90C25); Nonlinear programming (90C30); Nonlinear systems in control theory (93C10); Stability of control systems (93D99)
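The keywords above summarize the paper's core idea: posing the convergence analysis of a first-order method as a small semidefinite feasibility problem derived from an integral quadratic constraint. The sketch below illustrates that idea for gradient descent on an m-strongly convex, L-smooth function; it is an illustration written for this record, not code from the paper, and it assumes cvxpy with the SCS solver (the constants m, L and the step size are arbitrary demo values).

import numpy as np
import cvxpy as cp

# Gradient descent x_{k+1} = x_k - alpha * grad f(x_k), viewed as the linear
# system (A, B) in feedback with the gradient nonlinearity u_k = grad f(x_k).
m, L = 1.0, 10.0        # strong-convexity / smoothness constants (demo values)
alpha = 1.0 / L         # step size (demo value)
A, B = 1.0, -alpha
# Pointwise sector IQC satisfied by u = grad f(y) when f is m-strongly convex
# and L-smooth:
M = np.array([[-2.0 * m * L, m + L],
              [m + L,        -2.0]])

def rate_certified(rho):
    """True if the LMI certifies ||x_k - x*|| <= const * rho**k."""
    p = cp.Variable()   # Lyapunov weight: V(x) = p * ||x - x*||^2
    lam = cp.Variable() # IQC multiplier
    lmi = cp.bmat([[(A * A - rho ** 2) * p, A * B * p],
                   [A * B * p,              B * B * p]]) + lam * M
    prob = cp.Problem(cp.Minimize(0),
                      [0.5 * (lmi + lmi.T) << 0, p >= 1, lam >= 0])
    prob.solve(solver=cp.SCS)
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

# Bisect for the smallest certified rate; it approaches 1 - m/L = 0.9 here.
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if rate_certified(mid) else (mid, hi)
print(f"smallest certified linear rate: {hi:.4f}")

Feasibility of the small LMI at a given rho yields the Lyapunov certificate p*||x_k - x*||^2 <= rho^(2k) * p*||x_0 - x*||^2; the paper extends this template with larger state matrices and dynamic (Zames-Falb-type) multipliers to handle Nesterov's method and the heavy-ball method.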
Related Items
- Robustness analysis of uncertain discrete‐time systems with dissipation inequalities and integral quadratic constraints
- Zames-Falb multipliers for absolute stability: from O'Shea's contribution to convex searches
- Analysis of a generalised expectation–maximisation algorithm for Gaussian mixture models: a control systems perspective
- A frequency-domain analysis of inexact gradient methods
- Synthesis of accelerated gradient algorithms for optimization and saddle point problems using Lyapunov functions and LMIs
- Zames–Falb multipliers for convergence rate: motivating example and convex searches
- Differentially Private Accelerated Optimization Algorithms
- Generalizing the Optimized Gradient Method for Smooth Convex Minimization
- Optimal deterministic algorithm generation
- Explicit stabilised gradient descent for faster strongly convex optimisation
- Adaptive restart of the optimized gradient method for convex optimization
- Exact worst-case convergence rates of the proximal gradient method for composite convex minimization
- Proximal gradient flow and Douglas-Rachford splitting dynamics: global exponential stability via integral quadratic constraints
- Analytical convergence regions of accelerated gradient descent in nonconvex optimization under regularity condition
- A regularization interpretation of the proximal point method for weakly convex functions
- Potential Function-Based Framework for Minimizing Gradients in Convex and Min-Max Optimization
- On data-driven stabilization of systems with nonlinearities satisfying quadratic constraints
- Convergence Rates of the Heavy Ball Method for Quasi-strongly Convex Optimization
- An optimal gradient method for smooth strongly convex minimization
- A zeroing neural dynamics based acceleration optimization approach for optimizers in deep neural networks
- Factor-\(\sqrt{2}\) acceleration of accelerated gradient methods
- A distributed accelerated optimization algorithm over time‐varying directed graphs with uncoordinated step‐sizes
- A fixed step distributed proximal gradient push‐pull algorithm based on integral quadratic constraint
- On the necessity and sufficiency of discrete-time O'Shea-Zames-Falb multipliers
- Fast gradient method for low-rank matrix estimation
- A Systematic Approach to Lyapunov Analyses of Continuous-Time Models in Convex Optimization
- Branch-and-bound performance estimation programming: a unified methodology for constructing optimal optimization methods
- Optimal step length for the maximal decrease of a self-concordant function by the Newton method
- No-regret dynamics in the Fenchel game: a unified framework for algorithmic convex optimization
- Heavy-ball-based optimal thresholding algorithms for sparse linear inverse problems
- Heavy-ball-based hard thresholding algorithms for sparse signal recovery
- Uniting Nesterov and heavy ball methods for uniform global asymptotic stability of the set of minimizers
- Understanding a Class of Decentralized and Federated Optimization Algorithms: A Multirate Feedback Control Perspective
- Perturbed Fenchel duality and first-order methods
- A forward-backward algorithm with different inertial terms for structured non-convex minimization problems
- Principled analyses and design of first-order methods with inexact proximal operators
- Convergence of gradient algorithms for nonconvex \(C^{1+ \alpha}\) cost functions
- Conic linear optimization for computer-assisted proofs. Abstracts from the workshop held April 10--16, 2022
- Computation of invariant sets for discrete‐time uncertain systems
- Another Look at the Fast Iterative Shrinkage/Thresholding Algorithm (FISTA)
- An Optimal First Order Method Based on Optimal Quadratic Averaging
- Contractivity of Runge--Kutta Methods for Convex Gradient Systems
- Worst-Case Convergence Analysis of Inexact Gradient and Newton Methods Through Semidefinite Programming Performance Estimation
- Mini-workshop: Analysis of data-driven optimal control. Abstracts from the mini-workshop held May 9--15, 2021 (hybrid meeting)
- Efficient first-order methods for convex minimization: a constructive approach
- Operator Splitting Performance Estimation: Tight Contraction Factors and Optimal Parameter Selection
- Multiscale Analysis of Accelerated Gradient Methods
- Convergence Rates of Proximal Gradient Methods via the Convex Conjugate
- Convergence of first-order methods via the convex conjugate
- A simple PID-based strategy for particle swarm optimization algorithm
- Stability analysis by dynamic dissipation inequalities: on merging frequency-domain techniques with time-domain conditions
- A review of nonlinear FFT-based computational homogenization methods
- Convergence rates for an inertial algorithm of gradient type associated to a smooth non-convex minimization
- Smooth strongly convex interpolation and exact worst-case performance of first-order methods
- On the convergence analysis of the optimized gradient method
- On the Asymptotic Linear Convergence Speed of Anderson Acceleration, Nesterov Acceleration, and Nonlinear GMRES
- An introduction to continuous optimization for imaging
- Proximal Methods for Sparse Optimal Scoring and Discriminant Analysis
- Convergence rates of an inertial gradient descent algorithm under growth and flatness conditions
- Analysis of biased stochastic gradient descent using sequential semidefinite programs
- Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
- Analysis of Optimization Algorithms via Integral Quadratic Constraints: Nonstrongly Convex Problems
- Bounds for the tracking error of first-order online optimization methods
- Regularized nonlinear acceleration
- Analysis of optimization algorithms via sum-of-squares
- Projected Dynamical Systems on Irregular, Non-Euclidean Domains for Nonlinear Optimization
- Search Direction Correction with Normalized Gradient Makes First-Order Methods Faster
- Bearing-only distributed localization: a unified barycentric approach
- Iterative pre-conditioning for expediting the distributed gradient-descent method: the case of linear least-squares problem
- Connections between Georgiou and Smith's Robust Stability Type Theorems and the Nonlinear Small-Gain Theorems
- Learning-based adaptive control with an accelerated iterative adaptive law
- The Connections Between Lyapunov Functions for Some Optimization Algorithms and Differential Equations
- On polarization-based schemes for the FFT-based computational homogenization of inelastic materials
- A dynamical view of nonlinear conjugate gradient methods with applications to FFT-based computational micromechanics
- Understanding the acceleration phenomenon via high-resolution differential equations
- From differential equation solvers to accelerated first-order methods for convex optimization
- A control-theoretic perspective on optimal high-order optimization
- Robust Accelerated Gradient Methods for Smooth Strongly Convex Functions
- Passivity-based analysis of the ADMM algorithm for constraint-coupled optimization
- Robust and structure exploiting optimisation algorithms: an integral quadratic constraint approach
- An adaptive Polyak heavy-ball method
- On the convergence analysis of aggregated heavy-ball method
- Convex Synthesis of Accelerated Gradient Algorithms
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms
- Exact Worst-Case Performance of First-Order Methods for Composite Convex Optimization
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Gradient methods for minimizing composite functions
- First-order methods of smooth convex optimization with inexact oracle
- Absolute stability of nonlinear systems of automatic control
- Nonconvex optimization problem: The infinite-horizon linear-quadratic control problem with quadratic constraints
- Method of centers for minimizing generalized eigenvalues
- The long-step method of analytic centers for fractional problems
- Introductory lectures on convex optimization. A basic course.
- Templates for convex cone problems with applications to sparse signal recovery
- The complex structured singular value
- Performance of first-order methods for smooth convex minimization: a novel approach
- Semidefinite programming relaxations and algebraic optimization in control
- Dissipative dynamical systems. I: General theory
- Dissipative dynamical systems. II: Linear systems with quadratic supply rates
- Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
- Stability Analysis With Dissipation Inequalities and Integral Quadratic Constraints
- Graph Implementations for Nonsmooth Convex Programs
- Dualities in Convex Algebraic Geometry
- Robust Stochastic Approximation Approach to Stochastic Programming
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- Lyapunov functions for the problem of Lur'e in automatic control
- Linear Matrix Inequalities in System and Control Theory
- System analysis via integral quadratic constraints
- Zames-Falb Multipliers for Quadratic Programming
- Stability Conditions for Systems with Monotone and Slope-Restricted Nonlinearities