Learning theory of minimum error entropy under weak moment conditions
Publication: 5037873
DOI: 10.1142/S0219530521500044
zbMath: 1482.68200
OpenAlex: W3127795058
MaRDI QID: Q5037873
Shouyou Huang, Yun-Long Feng, Qiang Wu
Publication date: 4 March 2022
Published in: Analysis and Applications
Full work available at URL: https://doi.org/10.1142/s0219530521500044
Keywords: entropy; calibration; moment condition; robust learning; Bernstein condition; empirical U-statistics minimization
MSC: Computational learning theory (68Q32); Learning and adaptive systems in artificial intelligence (68T05)
Related Items
- Compressed data separation via unconstrained \(l_1\)-split analysis
- Rates of approximation by ReLU shallow neural networks
- Asymptotic properties of Kantorovich-type Szász-Mirakjan operators of higher order
- Convergence theorems in Orlicz and Bögel continuous functions spaces by means of Kantorovich discrete type sampling operators
- Some new inequalities and numerical results of bivariate Bernstein-type operator including Bézier basis and its GBS operator
- On approximation of unbounded functions by certain modified Bernstein operators
- Approximation properties of exponential type operators connected to \(p(x) = 2x^{3/2}\)
- Linear combinations of two Bernstein polynomials
- Error analysis of classification learning algorithms based on LUMs loss
- Shape preserving properties of \((\mathfrak{p},\mathfrak{q})\) Bernstein Bézier curves and corresponding results over \([a,b]\)
- On wavelet type generalized Bézier operators
Cites Work
- Consistency analysis of an empirical minimum error entropy algorithm
- The convergence rate of a regularized ranking algorithm
- Learning under \((1 + \epsilon)\)-moment conditions
- Blind source separation using Rényi's \(\alpha\)-marginal entropies
- Statistical behavior and consistency of classification methods based on convex risk minimization
- A statistical learning assessment of Huber regression
- Distributed kernel gradient descent algorithm for minimum error entropy principle
- Kernel gradient descent algorithm for information theoretic learning
- Empirical minimization
- The MEE Principle in Data Classification: A Perceptron-Based Analysis
- Learning Theory
- Support Vector Machines
- Minimum Total Error Entropy Method for Parameter Estimation
- Convergence of Gradient Descent for Minimum Error Entropy Principle in Linear Regression
- A Statistical Learning Approach to Modal Regression
- New Insights Into Learning With Correntropy-Based Regression
- Information Theoretic Learning
- Online regularized pairwise learning with least squares loss
- Regularization schemes for minimum error entropy principle
- Probability Inequalities for Sums of Bounded Random Variables
- Learning rates for regularized least squares ranking algorithm
- Convexity, Classification, and Risk Bounds
- Optimal learning with Gaussians and correntropy loss
- Robust Statistics