Parallel Optimization Techniques for Machine Learning
DOI: 10.1007/978-3-030-43736-7_13 · zbMath: 1448.68374 · OpenAlex: W3038327432 · MaRDI QID: Q3300501
Sudhir Kylasa, Chih-Hao Fang, Fred Roosta, Ananth Grama
Publication date: 29 July 2020
Published in: Parallel Algorithms in Computational Science and Engineering
Full work available at URL: https://doi.org/10.1007/978-3-030-43736-7_13
Keywords: trust-region framework; natural gradient; ADMM-based optimization framework; Kronecker-factored approximate curvature (KFAC); sub-sampled Newton-type methods
MSC: Numerical optimization and variational techniques (65K10); Learning and adaptive systems in artificial intelligence (68T05); Parallel numerical computation (65Y05)
Uses Software
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Sample size selection in optimization methods for machine learning
- On the limited memory BFGS method for large scale optimization
- Introductory lectures on convex optimization. A basic course.
- A Quasi-Newton Approach to Nonsmooth Convex Optimization Problems in Machine Learning
- On the Use of Stochastic Hessian Information in Optimization Methods for Machine Learning
- Large-Scale Machine Learning with Stochastic Gradient Descent
- An Algorithm for Least-Squares Estimation of Nonlinear Parameters
- Invex functions and constrained local minima
- Updating Quasi-Newton Matrices with Limited Storage
- Numerical Optimization
- Trust Region Methods
- Optimization Methods for Large-Scale Machine Learning
- Understanding Machine Learning
- Some methods of speeding up the convergence of iteration methods
- A Stochastic Approximation Method
- A method for the solution of certain non-linear problems in least squares
- The elements of statistical learning. Data mining, inference, and prediction