Run-and-inspect method for nonconvex optimization and global optimality bounds for R-local minimizers
DOI: 10.1007/s10107-019-01397-w
zbMATH: 1415.90085
arXiv: 1711.08172
OpenAlex: W2963085159
Wikidata: Q127963668 (Scholia: Q127963668)
MaRDI QID: Q2425163
Wotao Yin, Yuejiao Sun, Yi-Fan Chen
Publication date: 26 June 2019
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1711.08172
Numerical mathematical programming methods (65K05); Nonconvex programming, global optimization (90C26); Nonlinear programming (90C30)
Cites Work
- Optimization by Simulated Annealing
- Nearly unbiased variable selection under minimax concave penalty
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- GAITA: a Gauss-Seidel iterative thresholding algorithm for \(\ell_q\) regularized least squares regression
- Stochastic backward Euler: an implicit gradient descent algorithm for \(k\)-means clustering
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- Coordinate-friendly structures, algorithms and applications
- Global convergence of ADMM in nonconvex nonsmooth optimization
- Deep relaxation: partial differential equations for optimizing deep neural networks
- Cubic regularization of Newton method and its global performance
- A Block Coordinate Descent Method for Regularized Multiconvex Optimization with Applications to Nonnegative Tensor Factorization and Completion
- Introduction to Derivative-Free Optimization
- Trust Region Methods
- Nonconvex Sparse Logistic Regression With Weakly Convex Regularization
- Gradient Descent Only Converges to Minimizers: Non-Isolated Critical Points and Invariant Regions
- Entropy-SGD: biasing gradient descent into wide valleys