An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and Its Implications to Second-Order Methods
From MaRDI portal
Publication: 2848187
DOI: 10.1137/110833786
zbMath: 1298.90071
OpenAlex: W2043093325
MaRDI QID: Q2848187
Renato D. C. Monteiro, Benar Fux Svaiter
Publication date: 25 September 2013
Published in: SIAM Journal on Optimization
Full work available at URL: https://doi.org/10.1137/110833786
Keywords: variational inequality; maximal monotone operator; convex programming; proximal point; ergodic convergence; accelerated gradient; hybrid; extragradient; accelerated Newton
MSC classifications: Convex programming (90C25); Monotone operators and generalizations (47H05); Applications of operator theory in optimization, convex analysis, mathematical programming, economics (47N10)
Related Items
An adaptive accelerated first-order method for convex optimization
Tensor methods for finding approximate stationary points of convex functions
A Hybrid Proximal Extragradient Self-Concordant Primal Barrier Method for Monotone Variational Inequalities
Oracle complexity separation in convex optimization
Variants of the A-HPE and large-step A-HPE algorithms for strongly convex problems with applications to accelerated high-order tensor methods
Iterative Methods for the Elastography Inverse Problem of Locating Tumors
A note on approximate accelerated forward-backward methods with absolute and relative errors, and possibly strongly convex objectives
Accelerated meta-algorithm for convex optimization problems
Generalizing the Optimized Gradient Method for Smooth Convex Minimization
A projection algorithm for non-monotone variational inequalities
Superfast second-order methods for unconstrained convex optimization
Bregman Proximal Point Algorithm Revisited: A New Inexact Version and Its Inertial Variant
Potential Function-Based Framework for Minimizing Gradients in Convex and Min-Max Optimization
An efficient adaptive accelerated inexact proximal point method for solving linearly constrained nonconvex composite problems
A new convergence analysis and perturbation resilience of some accelerated proximal forward-backward algorithms with errors
Convergence rates of accelerated proximal gradient algorithms under independent noise
Hyperfast second-order local solvers for efficient statistically preconditioned distributed optimization
Reachability of optimal convergence rate estimates for high-order numerical convex optimization methods
Super-Universal Regularized Newton Method
Smooth monotone stochastic variational inequalities and saddle point problems: a survey
Principled analyses and design of first-order methods with inexact proximal operators
First-order methods for convex optimization
Another Look at the Fast Iterative Shrinkage/Thresholding Algorithm (FISTA)
On FISTA with a relative error rule
Lower bounds for finding stationary points I
A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization
Adaptive Catalyst for Smooth Convex Optimization
Lower bounds for finding stationary points II: first-order methods
Contracting Proximal Methods for Smooth Convex Optimization
Implementable tensor methods in unconstrained convex optimization
An Average Curvature Accelerated Composite Gradient Method for Nonconvex Smooth Composite Optimization Problems
Zero-convex functions, perturbation resilience, and subgradient projections for feasibility-seeking methods
A proximal-Newton method for unconstrained convex optimization in Hilbert spaces
Inexact proximal \(\epsilon\)-subgradient methods for composite convex optimization problems
Solutions to inexact resolvent inclusion problems with applications to nonlinear analysis and optimization
Optimizing the efficiency of first-order methods for decreasing the gradient of smooth convex functions
A FISTA-type accelerated gradient algorithm for solving smooth nonconvex composite optimization problems
Near-Optimal Hyperfast Second-Order Method for Convex Optimization
Complexity of a Quadratic Penalty Accelerated Inexact Proximal Point Method for Solving Linearly Constrained Nonconvex Composite Programs
Oracle complexity of second-order methods for smooth convex optimization
Unified Acceleration of High-Order Algorithms under General Hölder Continuity
A control-theoretic perspective on optimal high-order optimization
Higher-Order Methods for Convex-Concave Min-Max Optimization and Monotone Variational Inequalities
High-Order Optimization Methods for Fully Composite Problems
An Optimal High-Order Tensor Method for Convex Optimization
A variant of the hybrid proximal extragradient method for solving strongly monotone inclusions and its complexity analysis
Accelerated proximal envelopes: application to componentwise methods
On the computational efficiency of catalyst accelerated coordinate descent