Stochastic Methods for Composite and Weakly Convex Optimization Problems
Publication: 4561227
DOI: 10.1137/17M1135086
OpenAlex: W2602608495
Wikidata: Q128861220 (Scholia: Q128861220)
MaRDI QID: Q4561227
Publication date: 5 December 2018
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1703.08570
Related Items (30)
- Proximal methods avoid active strict saddles of weakly convex functions
- A stochastic subgradient method for distributionally robust non-convex and non-smooth learning
- Stochastic Multilevel Composition Optimization Algorithms with Level-Independent Convergence Rates
- Graphical Convergence of Subgradients in Nonconvex Optimization and Learning
- Coupled Learning Enabled Stochastic Programming with Endogenous Uncertainty
- Stochastic proximal subgradient descent oscillates in the vicinity of its accumulation set
- Learning with risks based on M-location
- Pathological Subgradient Dynamics
- Consistent approximations in composite optimization
- First-order methods for convex optimization
- Convergence of a stochastic subgradient method with averaging for nonsmooth nonconvex constrained optimization
- An Accelerated Inexact Proximal Point Method for Solving Nonconvex-Concave Min-Max Problems
- An algorithm for the minimization of nonsmooth nonconvex functions using inexact evaluations and its worst-case complexity
- Stochastic proximal gradient methods for nonconvex problems in Hilbert spaces
- Stochastic subgradient method converges on tame functions
- Characterization of solutions of strong-weak convex programming problems
- Strong Metric (Sub)regularity of Karush–Kuhn–Tucker Mappings for Piecewise Linear-Quadratic Convex-Composite Optimization and the Quadratic Convergence of Newton's Method
- A zeroth order method for stochastic weakly convex optimization
- Proximally Guided Stochastic Subgradient Method for Nonsmooth, Nonconvex Problems
- Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence
- Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity
- An Inertial Newton Algorithm for Deep Learning
- A Stochastic Subgradient Method for Nonsmooth Nonconvex Multilevel Composition Optimization
- Efficiency of minimizing compositions of convex functions and smooth maps
- Stochastic variance-reduced prox-linear algorithms for nonconvex composite optimization
- On the Convergence of Mirror Descent beyond Stochastic Convex Programming
- Computation for latent variable model estimation: a unified stochastic proximal framework
- Stochastic Difference-of-Convex-Functions Algorithms for Nonconvex Programming
- Sufficient conditions for a minimum of a strongly quasiconvex function on a weakly convex set
- A Study of Convex Convex-Composite Functions via Infimal Convolution with Applications
Cites Work
- Stochastic compositional gradient descent: algorithms for minimizing compositions of expected-value functions
- Solution of nonconvex nonsmooth stochastic optimization problems
- Stochastic generalized gradient method for nonconvex nonsmooth stochastic optimization
- Introductory lectures on convex optimization. A basic course.
- Non-smooth dynamical systems
- Measurable dependence of convex sets and functions on parameters
- Stochastic optimization problems with nondifferentiable cost functionals
- A function not constant on a connected set of critical points
- PhaseLift: Exact and Stable Signal Recovery from Magnitude Measurements via Convex Programming
- Phase Retrieval via Wirtinger Flow: Theory and Algorithms
- A Linearization Method for Nonsmooth Stochastic Programming Problems
- Clarke Subgradients of Stratifiable Functions
- Critical values of set-valued maps with stratifiable graphs. Extensions of Sard and Smale-Sard theorems
- Characterizations of Łojasiewicz inequalities: Subgradient flows, talweg, convexity
- Robust Stochastic Approximation Approach to Stochastic Programming
- Descent methods for composite nondifferentiable optimization problems
- On some properties of the generalized gradient method
- First and second order conditions for a class of nondifferentiable optimization problems
- A model algorithm for composite nondifferentiable optimization problems
- Monotone Operators and the Proximal Point Algorithm
- Analysis of recursive stochastic algorithms
- Variational Analysis
- Trust Region Methods
- A variational perspective on accelerated methods in optimization
- Prox-regular functions in variational analysis
- Variational Analysis of Regular Mappings
- Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods
- Stochastic Approximations and Differential Inclusions