A Subgradient Method Based on Gradient Sampling for Solving Convex Optimization Problems
DOI: 10.1080/01630563.2015.1086788 · zbMath: 1333.90093 · OpenAlex: W1863473538 · Wikidata: Q58028352 · MaRDI QID: Q2795104
Chee-Khian Sim, Yao-Hua Hu, Xiao Qi Yang
Publication date: 18 March 2016
Published in: Numerical Functional Analysis and Optimization
Full work available at URL: https://researchportal.port.ac.uk/portal/en/publications/a-subgradient-method-based-on-gradient-sampling-for-solving-convex-optimization-problems(fee1da4c-8548-497e-b834-8eb58696353e).html
MSC classifications: Numerical mathematical programming methods (65K05) · Convex programming (90C25) · Numerical methods based on nonlinear programming (49M37)
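The paper's own algorithm is not reproduced on this record page. As a rough illustration of the general idea behind gradient-sampling subgradient methods, the sketch below approximates a subgradient of a nonsmooth convex function by averaging gradients sampled at nearby points where the function is differentiable, then takes diminishing-step subgradient steps. The objective (an ℓ1-type function), the sample count, and the sampling radius are illustrative assumptions, not the paper's method.

```python
import random

def f(x):
    # Illustrative convex, nonsmooth objective: f(x) = |x1| + |x2|
    # (minimized at the origin, nondifferentiable on the axes).
    return abs(x[0]) + abs(x[1])

def grad_f(x):
    # Gradient of f at points where it is differentiable (no coordinate is 0).
    return [1.0 if xi > 0 else -1.0 for xi in x]

def sampled_subgradient(x, eps=1e-2, m=20):
    # Average gradients at m points drawn uniformly from a small box
    # around x; near a kink this averages the nearby gradient values,
    # giving an approximate (epsilon-)subgradient.
    g = [0.0] * len(x)
    for _ in range(m):
        y = [xi + random.uniform(-eps, eps) for xi in x]
        gy = grad_f(y)
        g = [gi + gyi for gi, gyi in zip(g, gy)]
    return [gi / m for gi in g]

def subgradient_method(x0, steps=200):
    # Standard subgradient iteration with diminishing step sizes 1/k,
    # tracking the best objective value seen (subgradient steps need
    # not decrease f monotonically).
    x = list(x0)
    best = f(x)
    for k in range(1, steps + 1):
        g = sampled_subgradient(x)
        t = 1.0 / k
        x = [xi - t * gi for xi, gi in zip(x, g)]
        best = min(best, f(x))
    return x, best

if __name__ == "__main__":
    random.seed(0)
    x, best = subgradient_method([2.0, -1.5])
    print("best objective value found:", best)
```

With diminishing steps, the best objective value approaches the minimum value 0; tracking `best` rather than the last iterate is the usual convention for subgradient methods, since individual steps can increase the objective.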
Related Items (2)
Cites Work
- Primal-dual subgradient methods for convex problems
- Convergence of a generalized subgradient method for nondifferentiable convex optimization
- Convergence of a simple subgradient level method
- On the convergence of conditional \(\varepsilon\)-subgradient methods for convex programs and convex-concave saddle-point problems.
- Two numerical methods for optimizing matrix stability
- Mirror descent and nonlinear projected subgradient methods for convex optimization.
- Augmented Lagrangian duality and nondifferentiable optimization methods in nonconvex programming
- On a modified subgradient algorithm for dual problems via sharp augmented Lagrangian
- Inexact subgradient methods for quasi-convex optimization problems
- The Efficiency of Ballstep Subgradient Level Methods for Convex Optimization
- Incremental Subgradient Methods for Nondifferentiable Optimization
- Information Relaxations, Duality, and Convex Stochastic Dynamic Programs
- An aggregate subgradient method for nonsmooth convex minimization
- Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization
- Approximate Primal Solutions and Rate Analysis for Dual Subgradient Methods
- Variational Analysis
- Ergodic convergence in subgradient optimization
- Convergence of Approximate and Incremental Subgradient Methods for Convex Optimization
- What is the Subdifferential of the Closed Convex Hull of a Function?
- The Efficiency of Subgradient Projection Methods for Convex Optimization, Part I: General Level Methods
- The Efficiency of Subgradient Projection Methods for Convex Optimization, Part II: Implementations and Extensions
- A Robust Gradient Sampling Algorithm for Nonsmooth, Nonconvex Optimization
- Lagrangian Relaxation via Ballstep Subgradient Methods
- A merit function approach to the subgradient method with averaging
- Approximating Subdifferentials by Random Sampling of Gradients