Geometrical interpretation of the predictor-corrector type algorithms in structured optimization problems
From MaRDI portal
Publication:3426228
DOI: 10.1080/02331930600815884
zbMath: 1108.49027
OpenAlex: W2035100079
MaRDI QID: Q3426228
Jérôme Malick, Warren L. Hare, Aris Daniilidis
Publication date: 8 March 2007
Published in: Optimization
Full work available at URL: https://doi.org/10.1080/02331930600815884
Keywords: Newton-type methods; proximal algorithm; Riemannian gradient; \(\mathcal U\)-Lagrangian; partly smooth function
MSC classifications: Nonlinear programming (90C30); Newton-type methods (49M15); Nonsmooth analysis (49J52); Methods of successive quadratic programming type (90C55)
Related Items
- A Trust-region Method for Nonsmooth Nonconvex Optimization
- Nonsmoothness in machine learning: specific structure, proximal identification, and applications
- A proximal method for composite minimization
- The degrees of freedom of partly smooth regularizers
- Forward-Backward Envelope for the Sum of Two Nonconvex Functions: Further Properties and Nonmonotone Linesearch Algorithms
- Newton acceleration on manifolds identified by proximal gradient methods
- Harnessing Structure in Composite Nonsmooth Minimization
- A Chain Rule for Strict Twice Epi-Differentiability and Its Applications
- Activity Identification and Local Linear Convergence of Forward--Backward-type Methods
- Derivative-free optimization methods for finite minimax problems
- On the interplay between acceleration and identification for the proximal gradient algorithm
- Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods
Cites Work
- Finite convergence of algorithms for nonlinear programs and variational inequalities
- Minimizing a differentiable function over a differential manifold
- A \(\mathcal{VU}\)-algorithm for convex minimization
- Newton methods for nonsmooth convex minimization: connections among \(\mathcal U\)-Lagrangian, Riemannian Newton and SQP methods
- On \(\mathcal{VU}\)-theory for Functions with Primal-Dual Gradient Structure
- Identifiable Surfaces in Constrained Optimization
- On the Identification of Active Constraints II: The Nonconvex Case
- Projected gradient methods for linearly constrained problems
- On the Identification of Active Constraints
- Variational Analysis
- Newton's method on Riemannian manifolds: covariant alpha theory
- The \(\mathcal U\)-Lagrangian of a convex function
- The \(\mathcal U\)-Lagrangian of the Maximum Eigenvalue Function
- Generalized Hessian Properties of Regularized Nonsmooth Functions
- Active Sets, Nonsmoothness, and Sensitivity
- Prox-regular functions in variational analysis
- On Matrices Depending on Parameters
- On a Class of Nonsmooth Composite Functions