A note on the convergence of deterministic gradient sampling in nonsmooth optimization
Publication: 6498411
DOI: 10.1007/s10589-024-00552-0 · Wikidata: Q128466017 · Scholia: Q128466017 · MaRDI QID: Q6498411
Publication date: 7 May 2024
Published in: Computational Optimization and Applications
Cites Work
- Nonsmooth optimization via quasi-Newton methods
- Improved convergence result for the discrete gradient and secant methods for nonsmooth optimization
- A quasi-Newton algorithm for nonconvex, nonsmooth optimization with global convergence guarantees
- An effective nonsmooth optimization algorithm for locally Lipschitz functions
- Convergence of the gradient sampling algorithm on directionally Lipschitz functions
- An adaptive gradient sampling algorithm for non-smooth optimization
- Löwner's Operator and Spectral Functions in Euclidean Jordan Algebras
- Proximity Maps for Convex Sets
- Semismooth and Semiconvex Functions in Constrained Optimization
- Optimization of Lipschitz continuous functions
- An Algorithm for Constrained Optimization with Semismooth Functions
- A Robust Gradient Sampling Algorithm for Nonsmooth, Nonconvex Optimization
- Approximating Subdifferentials by Random Sampling of Gradients