Learning under \((1 + \epsilon)\)-moment conditions
Publication: 778021
DOI: 10.1016/j.acha.2020.05.009
zbMath: 1442.62150
OpenAlex: W3033779185
MaRDI QID: Q778021
Publication date: 30 June 2020
Published in: Applied and Computational Harmonic Analysis
Full work available at URL: https://doi.org/10.1016/j.acha.2020.05.009
Keywords: learning theory; robust regression; empirical risk minimization; mean regression calibration; no-free-lunch theorem; weak moment condition
MSC classifications: Nonparametric robustness (62G35); General nonlinear regression (62J02); Learning and adaptive systems in artificial intelligence (68T05)
Related Items (9)
- Learning theory of minimum error entropy under weak moment conditions
- A Statistical Learning Approach to Modal Regression
- Fast rates of minimum error entropy with heavy-tailed noise
- Nonasymptotic analysis of robust regression with modified Huber's loss
- Robust pairwise learning with Huber loss
- Robust kernel-based distribution regression
- New Insights Into Learning With Correntropy-Based Regression
- A Framework of Learning Through Empirical Gain Maximization
- Functional linear regression with Huber loss
Cites Work
- Robust structure identification and variable selection in partial linear varying coefficient models
- Consistency analysis of an empirical minimum error entropy algorithm
- Empirical risk minimization for heavy-tailed losses
- Estimating normal means with symmetric gain functions
- An efficient and robust variable selection method for longitudinal generalized linear models
- Robust estimation and empirical likelihood inference with exponential squared loss for panel data models
- Learning from MOM's principles: Le Cam's approach
- Robust estimation for the varying coefficient partially nonlinear models
- Universality of deep convolutional neural networks
- Learning with correntropy-induced losses for regression with mixture of symmetric stable noise
- Robust exponential squared loss-based estimation in semi-functional linear regression models
- Learning rates of least-square regularized regression
- Estimation After Selection Under Reflected Normal Loss Function
- Non‐Gaussian Statistical Communication Theory
- Learning Theory
- Modern statistical estimation via oracle inequalities
- Support Vector Machines
- A new concentration result for regularized risk minimizers
- Robust regression using iteratively reweighted least-squares
- Techniques for nonlinear least squares and robust regression
- The reflected normal loss function
- Deep distributed convolutional neural networks: Universality
- Correntropy: Properties and Applications in Non-Gaussian Signal Processing
- Robust Non-Rigid Point Set Registration Using Spatially Constrained Gaussian Fields
- On Spiring's normal loss function
- A Statistical Learning Approach to Modal Regression
- Deep neural networks for rotation-invariance approximation and learning
- Robust Variable Selection With Exponential Squared Loss
- Robust Statistics
- Robust Estimation of a Location Parameter
- Robust Statistics