From inexact optimization to learning via gradient concentration
Publication: 2111477
DOI: 10.1007/s10589-022-00408-5 · OpenAlex: W3166001443 · MaRDI QID: Q2111477
Nicole Mücke, Lorenzo Rosasco, Bernhard Stankewitz
Publication date: 16 January 2023
Published in: Computational Optimization and Applications
Full work available at URL: https://arxiv.org/abs/2106.05397
Related Items (2)
- Towards an automatic uncertainty compiler
- Special issue for SIMAI 2020-2021: large-scale optimization and applications
Cites Work
- Nonparametric stochastic approximation with large step-sizes
- Statistical guarantees for the EM algorithm: from population to sample-based analysis
- Optimal rates for regularization of statistical inverse learning problems
- On regularization algorithms in learning theory
- Early stopping for statistical inverse problems via truncated SVD estimation
- Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
- On early stopping in gradient descent learning
- Divide and Conquer Kernel Ridge Regression: A Distributed Algorithm with Minimax Optimal Rates
- Iterative Regularization for Learning with Convex Loss Functions
- A Vector-Contraction Inequality for Rademacher Complexities
- Convergence rates of Kernel Conjugate Gradient for random design regression
- Support Vector Machines
- Gradient Convergence in Gradient Methods with Errors
- High-Dimensional Statistics
- Optimal Rates for Multi-pass Stochastic Gradient Methods
- Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
- High-Dimensional Probability
- DOI: 10.1162/153244302760200704
- DOI: 10.1162/153244303321897690
- Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent
- Robust Estimation via Robust Gradient Estimation
- Early Stopping for Kernel Boosting Algorithms: A General Analysis With Localized Complexities
- Understanding Machine Learning
- An Iteration Formula for Fredholm Integral Equations of the First Kind