Convergence guarantees for forward gradient descent in the linear regression model
From MaRDI portal
Publication:6592792
DOI: 10.1016/j.jspi.2024.106174 · zbMATH Open: 1543.62496 · MaRDI QID: Q6592792
Johannes Schmidt-Hieber, Thijs Bos
Publication date: 26 August 2024
Published in: Journal of Statistical Planning and Inference
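The publication analyzes forward gradient descent, which replaces the exact gradient with an unbiased estimate of the form (∇f(θ)·v)v for a random direction v, obtainable via a single forward-mode differentiation pass. A minimal sketch of one such update for the linear regression loss is below; the data, step size, and function name are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_gradient_step(theta, X, y, lr):
    """One forward-gradient update for the loss f(theta) = ||X theta - y||^2 / (2n).

    Illustrative sketch: the directional derivative grad @ v would in practice
    come from forward-mode AD; here it is computed from the closed-form gradient.
    """
    n = len(y)
    v = rng.standard_normal(theta.shape)   # random perturbation direction
    grad = X.T @ (X @ theta - y) / n       # exact gradient of the squared-error loss
    directional = grad @ v                 # directional derivative ∇f(theta)·v
    return theta - lr * directional * v    # unbiased gradient step: E[(∇f·v)v] = ∇f
```

Averaged over v, the update direction equals the true gradient, which is the starting point for the convergence guarantees the paper establishes.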
Cites Work
- Random design analysis of ridge regression
- Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices
- Random gradient-free minimization of convex functions
- Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
- Introduction to Derivative-Free Optimization
- Acceleration of Stochastic Approximation by Averaging
- Introduction to Stochastic Search and Optimization
- On the Averaged Stochastic Approximation for Linear Regression
- Derivative-free optimization methods
- Learning Theory and Kernel Machines
- An Approximation of the Error Backpropagation Algorithm in a Predictive Coding Network with Local Hebbian Synaptic Plasticity
This page was built for publication: Convergence guarantees for forward gradient descent in the linear regression model