Recent Theoretical Advances in Non-Convex Optimization
DOI: 10.1007/978-3-031-00832-0_3
zbMath: 1527.90169
arXiv: 2012.06188
OpenAlex: W3113216837
MaRDI QID: Q6355806
Marina Danilova, Pavel Dvurechensky, Sergey Guminov, Innokentiy Shibaev, Eduard Gorbunov, Dmitry Kamzolov, A. V. Gasnikov
Publication date: 11 December 2020
Full work available at URL: https://doi.org/10.1007/978-3-031-00832-0_3
MSC classes:
- Abstract computational complexity for mathematical programming problems (90C60)
- Nonconvex programming, global optimization (90C26)
Cites Work
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Minimizing finite sums with the stochastic average gradient
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Lectures on convex optimization
- Iterative hard thresholding for compressed sensing
- Enhancing sparsity by reweighted \(\ell _{1}\) minimization
- Parametric estimation. Finite sample theory
- A simple proof of the restricted isometry property for random matrices
- Generalized descent for global optimization
- Incremental gradient algorithms with stepsizes bounded away from zero
- Introductory lectures on convex optimization. A basic course.
- An accelerated directional derivative method for smooth stochastic convex optimization
- Iterative Potts minimization for the recovery of signals with discontinuities from indirect measurements: the multivariate case
- An adaptive high order method for finding third-order critical points of nonconvex optimization
- A theoretical and empirical comparison of gradient approximations in derivative-free optimization
- Zeroth-order methods for noisy Hölder-gradient functions
- Gradient methods for problems with inexact model of the objective
- Newton-type methods for non-convex optimization under inexact Hessian information
- Lower bounds for finding stationary points I
- Lower bounds for finding stationary points II: first-order methods
- Alternating minimization methods for strongly convex optimization
- Combining stochastic adaptive cubic regularization with negative curvature for nonconvex optimization
- First-order and stochastic optimization methods for machine learning
- Accelerated primal-dual gradient descent with linesearch for convex, nonconvex, and nonsmooth optimization problems
- Generalized uniformly optimal methods for nonlinear programming
- Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary
- Random gradient-free minimization of convex functions
- Gradient descent with random initialization: fast global convergence for nonconvex phase retrieval
- Misspecified nonconvex statistical optimization for sparse phase retrieval
- Stochastic global optimization.
- Cubic regularization of Newton method and its global performance
- Exact matrix completion via convex optimization
- Lectures on Modern Convex Optimization
- Proximal Splitting Methods in Signal Processing
- Optimization with Sparsity-Inducing Penalties
- Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization
- Identifiability in Blind Deconvolution With Subspace or Sparsity Constraints
- Phase Retrieval via Wirtinger Flow: Theory and Algorithms
- Large-Scale Machine Learning with Stochastic Gradient Descent
- Encyclopedia of Optimization
- Decoding by Linear Programming
- Introduction to Derivative-Free Optimization
- Some NP-complete problems in quadratic and nonlinear programming
- Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming
- An Incremental Gradient(-Projection) Method with Momentum Term and Adaptive Stepsize Rule
- Trust Region Methods
- Accelerated Methods for NonConvex Optimization
- Theoretical Insights Into the Optimization Landscape of Over-Parameterized Shallow Neural Networks
- Stochastic Model-Based Minimization of Weakly Convex Functions
- Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy
- About the Power Law of the PageRank Vector Component Distribution. Part 2. The Buckley–Osthus Model, Verification of the Power Law for This Model, and Setup of Real Search Engines
- Complexity Analysis of Second-Order Line-Search Algorithms for Smooth Nonconvex Optimization
- Optimization Methods for Large-Scale Machine Learning
- Non-convex Optimization for Machine Learning
- Black-Box Complexity of Local Minimization
- Derivative-Free Methods for Policy Optimization: Guarantees for Linear Quadratic Systems
- Stochastic Three Points Method for Unconstrained Smooth Minimization
- Finding approximate local minima faster than gradient descent
- Global Convergence Rate Analysis of a Generic Line Search Algorithm with Noise
- Manifold Gradient Descent Solves Multi-Channel Sparse Blind Deconvolution Provably and Efficiently
- On Nonconvex Optimization for Machine Learning
- An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization
- Primal–dual accelerated gradient methods with small-dimensional relaxation oracle
- Optimizing Static Linear Feedback: Gradient Method
- A concise second-order complexity analysis for unconstrained optimization using high-order regularized models
- Foundations of Data Science
- Derivative-free optimization methods
- Gradient Descent Finds the Cubic-Regularized Nonconvex Newton Step
- Nonconvex Optimization Meets Low-Rank Matrix Factorization: An Overview
- Accelerated Stochastic Algorithms for Nonconvex Finite-Sum and Multiblock Optimization
- Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
- The Power of Convex Relaxation: Near-Optimal Matrix Completion
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Optimal Inapproximability Results for MAX‐CUT and Other 2‐Variable CSPs?
- Low-rank matrix completion using alternating minimization
- Some methods of speeding up the convergence of iteration methods
- A Simplex Method for Function Minimization
- Numerical methods for finding global extrema (Case of a non-uniform mesh)
- Generalized Momentum-Based Methods: A Hamiltonian Perspective
- Inexact model: a framework for optimization and variational inequalities
- Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization
- First-order methods for convex optimization