Analysis of two gradient-based algorithms for on-line regression
Publication: 1970199
DOI: 10.1006/JCSS.1999.1635 · zbMath: 0961.68148 · OpenAlex: W2033547792 · Wikidata: Q59538608 · Scholia: Q59538608 · MaRDI QID: Q1970199
Publication date: 19 March 2000
Published in: Journal of Computer and System Sciences
Full work available at URL: https://doi.org/10.1006/jcss.1999.1635
Related Items (9)
- Adaptive and optimal online linear regression on \(\ell^1\)-balls
- Optimal anytime regret with two experts
- Randomized prediction of individual sequences
- Recursive forecast combination for dependent heterogeneous data
- A continuous-time approach to online optimization
- Sequential model aggregation for production forecasting
- A quasi-Bayesian perspective to online clustering
- Learning to assign degrees of belief in relational domains
- Adaptive and self-confident on-line learning algorithms
Cites Work
- Exponentiated gradient versus gradient descent for linear predictors
- Decision theoretic generalizations of the PAC model for neural net and other learning applications
- A game of prediction with expert advice
- The weighted majority algorithm
- On-line learning of linear functions
- Universal prediction of individual sequences
- How to use expert advice
- A decision-theoretic extension of stochastic complexity and its applications to learning
- Probability Inequalities for Sums of Bounded Random Variables
- Convergence of stochastic processes
- Relative loss bounds for multidimensional regression problems