A Zeroth-Order Proximal Stochastic Gradient Method for Weakly Convex Stochastic Optimization
From MaRDI portal
Publication: 6066421
DOI: 10.1137/22M1494270
arXiv: 2205.01633
MaRDI QID: Q6066421
Spyridon Pougkakiotis, Unnamed Author
Publication date: 16 November 2023
Published in: SIAM Journal on Scientific Computing
Full work available at URL: https://arxiv.org/abs/2205.01633
Keywords: stochastic gradient descent; composite optimization; hyperparameter tuning; zeroth-order optimization; weakly convex stochastic optimization
Mathematics Subject Classification: Nonlinear programming (90C30); Derivative-free methods and methods using generalized derivatives (90C56); Stochastic programming (90C15)
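To make the topic concrete, below is a minimal sketch of a generic zeroth-order proximal stochastic gradient iteration of the kind studied in this line of work: a two-point random-direction estimate of the gradient of the smooth stochastic term, followed by a proximal step on a simple nonsmooth regularizer. This is an illustration under assumed choices (unit-sphere directions, an ℓ1 regularizer, fixed step sizes), not the paper's exact algorithm; the names `zo_prox_sgd`, `soft_threshold`, and the noisy oracle `F` are hypothetical.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1: componentwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def zo_prox_sgd(F, x0, steps=2000, alpha=1e-2, mu=1e-4, lam=1e-3, seed=0):
    # Minimize E[F(x, xi)] + lam * ||x||_1 using function values only.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    d = x.size
    for _ in range(steps):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)  # random direction on the unit sphere
        # Two-point zeroth-order estimate of the gradient of f at x.
        g = d * (F(x + mu * u, rng) - F(x - mu * u, rng)) / (2.0 * mu) * u
        # Proximal stochastic gradient step on the composite objective.
        x = soft_threshold(x - alpha * g, alpha * lam)
    return x

# Usage sketch: noisy least squares, f(x) = ||Ax - b||^2 / (2m) + noise.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
b = A @ np.ones(20)
F = lambda x, r: np.linalg.norm(A @ x - b) ** 2 / 100.0 + 0.01 * r.standard_normal()
x_hat = zo_prox_sgd(F, np.zeros(20), steps=5000)
```

The single-sample, two-point estimator keeps the per-iteration cost at two function evaluations; the weak convexity of f is what the convergence analyses in the works cited below rely on, and the sketch itself does not verify it.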
Cites Work
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Generalized ADMM with optimal indefinite proximal term for linearly constrained convex optimization
- Residual whiteness principle for automatic parameter selection in \(\ell_2-\ell_2\) image super-resolution problems
- Convergence of a random optimization method for constrained optimization problems
- GCV for Tikhonov regularization by partial SVD
- An efficient duality-based approach for PDE-constrained sparse optimization
- \(L\)-curve curvature bounds via Lanczos bidiagonalization
- A zeroth order method for stochastic weakly convex optimization
- Noisy zeroth-order optimization for non-smooth saddle point problems
- Efficiency of minimizing compositions of convex functions and smooth maps
- Random gradient-free minimization of convex functions
- On the global and linear convergence of the generalized alternating direction method of multipliers
- Phase retrieval: stability and recovery guarantees
- Random optimization
- Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
- Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
- Strong and Weak Convexity of Sets and Functions
- Convergence and regularization results for optimal control problems with sparsity functional
- Self-calibration and biconvex compressive sensing
- Algorithm 866
- Introduction to Derivative-Free Optimization
- Minimization by Random Search Techniques
- Multivariate stochastic approximation using a simultaneous perturbation gradient approximation
- Generalized Gradients and Applications
- Variational Analysis
- Stochastic Model-Based Minimization of Weakly Convex Functions
- Pattern Search Methods for User-Provided Points: Application to Molecular Geometry Problems
- Zeroth-Order Stochastic Compositional Algorithms for Risk-Aware Learning
- IFISS: A Computational Laboratory for Investigating Incompressible Flow Problems
- Interior‐point methods and preconditioning for PDE‐constrained optimization problems involving sparsity terms
- Optimization by Direct Search in Matrix Computations
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Proximité et dualité dans un espace hilbertien [Proximity and duality in a Hilbert space]
- Finding Optimal Algorithmic Parameters Using Derivative‐Free Optimization
- Zeroth-order optimization with orthogonal random directions