Adaptive Sampling for Incremental Optimization Using Stochastic Gradient Descent
Publication: 2835640
DOI: 10.1007/978-3-319-24486-0_21
zbMath: 1471.68222
OpenAlex: W2294540259
MaRDI QID: Q2835640
Guillaume Papa, Pascal Bianchi, Stéphan Clémençon
Publication date: 30 November 2016
Published in: Lecture Notes in Computer Science
Full work available at URL: https://doi.org/10.1007/978-3-319-24486-0_21
Classification (MSC):
- Learning and adaptive systems in artificial intelligence (68T05)
- Approximation methods and heuristics in mathematical programming (90C59)
- Stochastic approximation (62L20)
Cites Work
- Minimizing finite sums with the stochastic average gradient
- Weak convergence rates for stochastic approximation with application to multiple targets and simulated annealing
- Introductory Lectures on Convex Optimization: A Basic Course
- Robust Stochastic Approximation Approach to Stochastic Programming
- Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- A Stochastic Approximation Method
- Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm