Adaptive and self-confident on-line learning algorithms
Publication: 1604218
DOI: 10.1006/jcss.2001.1795 · zbMath: 1006.68162 · OpenAlex: W2055639053 · Wikidata: Q59538604 · Scholia: Q59538604 · MaRDI QID: Q1604218
Peter Auer, Nicolò Cesa-Bianchi, Claudio Gentile
Publication date: 4 July 2002
Published in: Journal of Computer and System Sciences
Full work available at URL: https://doi.org/10.1006/jcss.2001.1795
Related Items (20)
- Improved second-order bounds for prediction with expert advice
- Adaptive and optimal online linear regression on \(\ell^1\)-balls
- Stochastic optimization for real time service capacity allocation under random service demand
- Forecasting electricity consumption by aggregating specialized experts
- Dynamic regret of adaptive gradient methods for strongly convex problems
- Scale-free online learning
- Approachability, regret and calibration: implications and equivalences
- Aggregating Algorithm for a Space of Analytic Functions
- A generalized online mirror descent with applications to classification and regression
- Leading strategies in competitive on-line prediction
- A continuous-time approach to online optimization
- Sequential model aggregation for production forecasting
- Internal regret in on-line portfolio selection
- Internal regret in on-line portfolio selection
- Competing with wild prediction rules
- Regret to the best vs. regret to the average
- Unnamed Item
- Scale-Free Algorithms for Online Linear Optimization
- Unnamed Item
- Small-Loss Bounds for Online Learning with Partial Information
Cites Work
- Unnamed Item
- Unnamed Item
- Unnamed Item
- An iterative row-action method for interval convex programming
- A game of prediction with expert advice
- Tracking the best disjunction
- Tracking the best expert
- The weighted majority algorithm
- On-line learning of linear functions
- Support-vector networks
- Derandomizing stochastic prediction strategies
- Analysis of two gradient-based algorithms for on-line regression
- Queries and concept learning
- DOI: 10.1162/15324430260185600
- The Perceptron: A Model for Brain Functioning. I
- How to use expert advice
- A decision-theoretic extension of stochastic complexity and its applications to learning
- Relative loss bounds for on-line density estimation with the exponential family of distributions
- Relative loss bounds for multidimensional regression problems