Scientific article; zbMATH DE number 7370542
MaRDI Publication ID: 4998897
Publication date: 9 July 2021
Full work available at URL: https://jmlr.csail.mit.edu/papers/v22/19-716.html
Title: not available (zbMATH Open Web Interface contents unavailable due to conflicting licenses)
Related Items (2)
- Dimension independent excess risk by stochastic gradient descent
- From inexact optimization to learning via gradient concentration
Cites Work
- Unnamed items (8)
- Nonparametric stochastic approximation with large step-sizes
- On regularization algorithms in learning theory
- Multi-kernel regularized classifiers
- Fast rates for support vector machines using Gaussian kernels
- Online gradient descent learning algorithms
- Optimal aggregation of classifiers in statistical learning
- Optimal rates for the regularized least-squares algorithm
- Statistical performance of support vector machines
- On early stopping in gradient descent learning
- Iterative Regularization for Learning with Convex Loss Functions
- Adaptivity of averaged stochastic gradient descent to local strong convexity for logistic regression
- Online Learning as Stochastic Approximation of Regularization Paths: Optimality and Almost-Sure Convergence
- Learning Theory
- Support Vector Machines
- Online Regularized Classification Algorithms
- Cross-validation based adaptation for regularization operators in learning theory
- Robust Stochastic Approximation Approach to Stochastic Programming
- Optimal Rates for Multi-pass Stochastic Gradient Methods
- Optimization Methods for Large-Scale Machine Learning
- doi:10.1162/153244302760200704
- doi:10.1162/1532443041424319
- doi:10.1162/153244303321897690
- Learning Theory
- Convexity, Classification, and Risk Bounds
- Theory of Reproducing Kernels