Relative loss bounds for on-line density estimation with the exponential family of distributions

From MaRDI portal
Publication:5945695

DOI: 10.1023/A:1010896012157
zbMath: 0988.68173
arXiv: 1301.6677
OpenAlex: W1506313179
MaRDI QID: Q5945695

Katy S. Azoury, Manfred K. Warmuth

Publication date: 22 July 2002

Published in: Machine Learning

Full work available at URL: https://arxiv.org/abs/1301.6677




Related Items (34)

Game theory, maximum entropy, minimum discrepancy and robust Bayesian decision theory
Deformed statistics Kullback-Leibler divergence minimization within a scaled Bregman framework
Adaptive and optimal online linear regression on \(\ell^1\)-balls
Learning noisy linear classifiers via adaptive and selective sampling
Nonstationary online convex optimization with multiple predictions
“Calibeating”: Beating forecasters at their own game
Multiclass classification with bandit feedback using adaptive regularization
Aggregating Algorithm for a Space of Analytic Functions
Online Learning Based on Online DCA and Application to Online Classification
Weighted last-step min-max algorithm with improved sub-logarithmic regret
Kernelization of matrix updates, when and how?
A generalized online mirror descent with applications to classification and regression
Leading strategies in competitive on-line prediction
Unnamed Item
Smooth calibration, leaky forecasts, finite recall, and Nash dynamics
Sequential model aggregation for production forecasting
Unnamed Item
A quasi-Bayesian perspective to online clustering
Classification into Kullback-Leibler balls in exponential families
Learning rates of gradient descent algorithm for classification
A primal-dual perspective of online learning algorithms
Monte Carlo Information-Geometric Structures
New aspects of Bregman divergence in regression and classification with parametric and nonparametric estimation
An Upper Bound for Aggregating Algorithm for Regression with Changing Dependencies
Online regularized generalized gradient classification algorithms
Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices
Unnamed Item
On the Bias, Risk, and Consistency of Sample Means in Multi-armed Bandits
Suboptimality of constrained least squares and improvements via non-linear predictors
Quasiconvex Jensen Divergences and Quasiconvex Bregman Divergences
Adaptive and self-confident on-line learning algorithms
Relative expected instantaneous loss bounds
Distribution-free robust linear regression
Computing statistical divergences with sigma points