Using Taylor-approximated gradients to improve the Frank-Wolfe method for empirical risk minimization
From MaRDI portal
Publication: 6579995
DOI: 10.1137/22m1519286
MaRDI QID: Q6579995
Publication date: 29 July 2024
Published in: SIAM Journal on Optimization
Keywords: computational complexity, convex optimization, linear prediction, empirical risk minimization, Frank-Wolfe, linear minimization oracle
Mathematics Subject Classification: Analysis of algorithms and problem complexity (68Q25); Convex programming (90C25); Large-scale problems in mathematical programming (90C06); Abstract computational complexity for mathematical programming problems (90C60); Nonconvex programming, global optimization (90C26)
Cites Work
- The landscape of empirical risk for nonconvex losses
- Generalized stochastic Frank-Wolfe algorithm with stochastic "substitute gradient" for structured convex optimization
- Conditional Gradient Sliding for Convex Optimization
- An Extended Frank-Wolfe Method with "In-Face" Directions, and Its Application to Low-Rank Matrix Completion
- Stochastic Conditional Gradient++: (Non)Convex Minimization and Continuous Submodular Maximization
- The Elements of Statistical Learning