Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria
DOI: 10.1007/s10107-019-01432-w · zbMath: 1459.65083 · arXiv: 1610.03446 · OpenAlex: W2977046017 · Wikidata: Q127217092 (Scholia: Q127217092) · MaRDI QID: Q2220664
Dmitriy Drusvyatskiy, Alexander D. Ioffe, Adrian S. Lewis
Publication date: 25 January 2021
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1610.03446
Keywords: nonsmooth optimization · error estimation · Ekeland's principle · Kurdyka-Łojasiewicz inequality · Taylor-like model
MSC: Numerical mathematical programming methods (65K05) · Nonlinear programming (90C30) · Numerical optimization and variational techniques (65K10)
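For orientation, a brief sketch of the setup named in the title, paraphrased from the linked arXiv abstract; the symbols \(f\), \(f_x\), \(x^+\), and \(\omega\) are illustrative notation, not quotations from the record. The algorithms considered successively minimize Taylor-like models \(f_x\) of the objective \(f\), that is, models whose error is controlled by a growth function \(\omega\) vanishing to first order at the origin:

\[
|f(y) - f_x(y)| \;\le\; \omega(\|y - x\|) \qquad \text{for all } y.
\]

The main result relates the step size \(\|x^+ - x\|\) of any such scheme to the slope of \(f\) at a nearby point, which underlies the error bounds and termination criteria of the title.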
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- An inexact successive quadratic approximation method for L-1 regularized optimization
- Gradient methods for minimizing composite functions
- Proximal alternating linearized minimization for nonconvex and nonsmooth problems
- Quadratic growth and critical point stability of semi-algebraic functions
- Transversality and alternating projections for nonconvex sets
- Accelerating the cubic regularization of Newton's method on convex problems
- On gradients of functions definable in o-minimal structures
- Error bounds and convergence analysis of feasible descent methods: A general approach
- Nonsmooth equations in optimization. Regularity, calculus, methods and applications
- From error bounds to the complexity of first-order descent methods for convex functions
- On the variational principle
- A Gauss-Newton method for convex composite optimization
- Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods
- Efficiency of minimizing compositions of convex functions and smooth maps
- Cubic regularization of Newton method and its global performance
- Curves of Descent
- On the Evaluation Complexity of Composite Function Minimization with Applications to Nonconvex Nonlinear Programming
- Convergence of an Inexact Algorithm for Composite Nonsmooth Optimization
- Clarke Subgradients of Stratifiable Functions
- Implicit Functions and Solution Mappings
- Characterizations of Łojasiewicz inequalities: Subgradient flows, talweg, convexity
- On the global convergence of trust region algorithms for unconstrained minimization
- On the superlinear convergence of a trust region algorithm for nonsmooth optimization
- Descent methods for composite nondifferentiable optimization problems
- A model algorithm for composite nondifferentiable optimization problems
- Proximal Subgradients, Marginal Values, and Augmented Lagrangians in Nonconvex Optimization
- Monotone Operators and the Proximal Point Algorithm
- Optimization of Lipschitz continuous functions
- Upper-Lipschitz multifunctions and inverse subdifferentials
- Metric regularity and subdifferential calculus
- Stochastic Model-Based Minimization of Weakly Convex Functions
- Prox-regular functions in variational analysis
- Variational Analysis of Regular Mappings
- Solving (most) of a set of quadratic equalities: composite optimization for robust phase retrieval
- Second-order growth, tilt stability, and metric regularity of the subdifferential
- Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods
- Composite Optimization by Nonconvex Majorization-Minimization
- A Robust Gradient Sampling Algorithm for Nonsmooth, Nonconvex Optimization
- Modified Gauss–Newton scheme with worst case guarantees for global performance