An inexact restoration-nonsmooth algorithm with variable accuracy for stochastic nonsmooth convex optimization problems in machine learning and stochastic linear complementarity problems
DOI: 10.1016/j.cam.2022.114943 · OpenAlex: W4309775533 · MaRDI QID: Q2112678
Tijana Ostojić, Nataša Krklec Jerinkić, Nataša Krejić
Publication date: 11 January 2023
Published in: Journal of Computational and Applied Mathematics
Full work available at URL: https://arxiv.org/abs/2103.13651
Keywords: nonsmooth optimization; subgradient; sample average approximation; inexact restoration; variable sample size
MSC classification: Numerical mathematical programming methods (65K05); Convex programming (90C25); Nonlinear programming (90C30); Numerical optimization and variational techniques (65K10); Stochastic programming (90C15)
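The title and keywords describe an inexact-restoration method that uses sample average approximation with a variable sample size, i.e. cheap low-accuracy function estimates early on and progressively more accurate ones later. The following is only an illustrative sketch of that general idea on a toy nonsmooth convex problem (minimizing E[|x - ξ|] by SAA subgradient steps with a growing sample); it is not the paper's algorithm, and all names and parameter values are hypothetical.

```python
import random

def saa_subgradient_step(x, sample, step):
    """One subgradient step on the sample average of f_xi(x) = |x - xi|."""
    # A subgradient of the sample average approximation at x.
    g = sum((1 if x > xi else -1 if x < xi else 0) for xi in sample) / len(sample)
    return x - step * g

def variable_sample_solve(data, iters=200, n0=5, growth=1.2, step0=2.0):
    """Minimize E[|x - xi|] via SAA with an increasing sample size
    (variable accuracy): small, noisy samples in early iterations,
    larger, more accurate ones as the iterate approaches a solution."""
    random.seed(0)  # reproducible illustration
    x, n = 0.0, float(n0)
    for k in range(1, iters + 1):
        sample = random.choices(data, k=min(int(n), len(data)))
        x = saa_subgradient_step(x, sample, step0 / k)  # diminishing step size
        n *= growth  # grow the sample size across iterations
    return x
```

For data concentrated on two values, the minimizer of E[|x - ξ|] is the sample median, so the iterate should settle near it; the variable sample size keeps early iterations cheap while the diminishing step damps the sampling noise.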
Cites Work
- Feasible smooth method based on Barzilai-Borwein method for stochastic linear complementarity problem
- Adaptive stepsizes for recursive estimation with applications in approximate dynamic programming
- An adaptive Monte Carlo algorithm for computing mixed logit estimators
- Robust solution of monotone stochastic linear complementarity problems
- A new line search inexact restoration approach for nonlinear programming
- Stochastic algorithms with Armijo stepsizes for minimization of functions
- A bundle-Newton method for nonsmooth unconstrained minimization
- Optimization algorithm with probabilistic estimation
- Variable metric bundle methods: From conceptual to implementable forms
- On the worst-case evaluation complexity of non-monotone line search algorithms
- Line search methods with variable sample size for unconstrained optimization
- Inexact-restoration algorithm for constrained optimization
- Inexact restoration with subsampled trust-region methods for finite-sum minimization
- Penalty variable sample size method for solving optimization problems with equality constraints in a form of mathematical expectation
- Barzilai–Borwein method with variable sample size for stochastic linear complementarity problems
- Inexact Restoration approach for minimization with inexact evaluation of the objective function
- An adaptive gradient sampling algorithm for non-smooth optimization
- A Quasi-Newton Approach to Nonsmooth Convex Optimization Problems in Machine Learning
- Introduction to Nonsmooth Optimization
- Convergence of the Gradient Sampling Algorithm for Nonsmooth Nonconvex Optimization
- Variable-sample methods for stochastic optimization
- Analysis of limited-memory BFGS on a class of nonsmooth convex functions
- A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization
- Lectures on Stochastic Programming: Modeling and Theory, Third Edition
- Stochastic Optimization Methods
- On the Complexity of an Inexact Restoration Method for Constrained Optimization
- Analysis of the gradient method with an Armijo–Wolfe line search on a class of non-smooth convex functions
- A Stochastic Line Search Method with Expected Complexity Analysis
- Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact
- Approximating Subdifferentials by Random Sampling of Gradients
- Subsampled inexact Newton methods for minimizing large sums of convex functions