scientific article; zbMATH DE number 6542787
From MaRDI portal
Publication:5744794
zbMath: 1351.62131 · MaRDI QID: Q5744794
No author found.
Publication date: 19 February 2016
Full work available at URL: http://jmlr.csail.mit.edu/papers/v16/feng15a.html
Title: zbMATH Open Web Interface contents unavailable due to conflicting licenses.
Keywords: robust regression; statistical learning theory; least squares regression; correntropy; robust loss function; maximum correntropy criterion
MSC: Nonparametric robustness (62G35); General nonlinear regression (62J02); Learning and adaptive systems in artificial intelligence (68T05)
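The keywords above center on the maximum correntropy criterion. As a hedged illustration only (not code from the indexed paper), the correntropy-induced loss — often called the Welsch loss — is \(\ell_\sigma(r) = \sigma^2\,(1 - e^{-r^2/\sigma^2})\): it behaves like least squares for small residuals but saturates at \(\sigma^2\) for large ones, which is the source of its robustness to outliers. A minimal sketch:

```python
import numpy as np

def correntropy_loss(residual, sigma=1.0):
    """Correntropy-induced (Welsch) loss: sigma^2 * (1 - exp(-r^2 / sigma^2)).

    Approximates the squared loss r^2 for |r| << sigma, but is bounded
    above by sigma^2, so a single large outlier cannot dominate the
    empirical risk the way it does under least squares.
    """
    r = np.asarray(residual, dtype=float)
    return sigma**2 * (1.0 - np.exp(-(r**2) / sigma**2))

# Small residual: essentially the least-squares penalty r^2.
print(correntropy_loss(0.01))   # ≈ 1e-4
# Large residual: the loss saturates near sigma^2 = 1.
print(correntropy_loss(100.0))  # ≈ 1.0
```

The bandwidth `sigma` trades efficiency for robustness: large `sigma` recovers ordinary least squares behavior, while small `sigma` down-weights outliers more aggressively.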
Related Items (24):
- A Statistical Learning Approach to Modal Regression
- On the robustness of regularized pairwise learning methods based on kernels
- Gradient descent for robust kernel-based regression
- Kernel-based sparse regression with the correntropy-induced loss
- Error analysis on regularized regression based on the maximum correntropy criterion
- Half-quadratic alternating direction method of multipliers for robust orthogonal tensor approximation
- Joint learning adaptive metric and optimal classification hyperplane
- Kernel-based maximum correntropy criterion with gradient descent method
- Deep learning theory of distribution regression with CNNs
- Robust stochastic configuration networks with maximum correntropy criterion for uncertain data regression
- Nonasymptotic analysis of robust regression with modified Huber's loss
- Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery
- Robust Support Vector Machines for Classification with Nonconvex and Smooth Losses
- A Robust Regression Framework with Laplace Kernel-Induced Loss
- Calibration of \(\epsilon\)-insensitive loss in support vector machines regression
- Robust kernel-based distribution regression
- Robustness analysis of a maximum correntropy framework for linear regression
- Learning with correntropy-induced losses for regression with mixture of symmetric stable noise
- Analysis of Regression Algorithms with Unbounded Sampling
- Learning under \((1 + \epsilon)\)-moment conditions
- Unnamed Item
- New Insights Into Learning With Correntropy-Based Regression
- A Framework of Learning Through Empirical Gain Maximization
- Optimal learning with Gaussians and correntropy loss