Relative loss bounds for on-line density estimation with the exponential family of distributions
From MaRDI portal
Publication: 5945695
DOI: 10.1023/A:1010896012157
zbMath: 0988.68173
arXiv: 1301.6677
OpenAlex: W1506313179
MaRDI QID: Q5945695
Katy S. Azoury, Manfred K. Warmuth
Publication date: 22 July 2002
Published in: Machine Learning
Full work available at URL: https://arxiv.org/abs/1301.6677
Classification (MSC):
Learning and adaptive systems in artificial intelligence (68T05)
Problem solving in the context of artificial intelligence (heuristics, search strategies, etc.) (68T20)
Related Items (34):
Game theory, maximum entropy, minimum discrepancy and robust Bayesian decision theory
Deformed statistics Kullback-Leibler divergence minimization within a scaled Bregman framework
Adaptive and optimal online linear regression on \(\ell^1\)-balls
Learning noisy linear classifiers via adaptive and selective sampling
Nonstationary online convex optimization with multiple predictions
“Calibeating”: Beating forecasters at their own game
Multiclass classification with bandit feedback using adaptive regularization
Aggregating Algorithm for a Space of Analytic Functions
Online Learning Based on Online DCA and Application to Online Classification
Weighted last-step min-max algorithm with improved sub-logarithmic regret
Kernelization of matrix updates, when and how?
A generalized online mirror descent with applications to classification and regression
Leading strategies in competitive on-line prediction
Smooth calibration, leaky forecasts, finite recall, and Nash dynamics
Sequential model aggregation for production forecasting
A quasi-Bayesian perspective to online clustering
Classification into Kullback-Leibler balls in exponential families
Learning rates of gradient descent algorithm for classification
A primal-dual perspective of online learning algorithms
Monte Carlo Information-Geometric Structures
New aspects of Bregman divergence in regression and classification with parametric and nonparametric estimation
An Upper Bound for Aggregating Algorithm for Regression with Changing Dependencies
Online regularized generalized gradient classification algorithms
Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices
On the Bias, Risk, and Consistency of Sample Means in Multi-armed Bandits
Suboptimality of constrained least squares and improvements via non-linear predictors
Quasiconvex Jensen Divergences and Quasiconvex Bregman Divergences
Adaptive and self-confident on-line learning algorithms
Relative expected instantaneous loss bounds
Distribution-free robust linear regression
Computing statistical divergences with sigma points