An error analysis for deep binary classification with sigmoid loss
Publication: 6588360
DOI: 10.1016/j.ins.2024.121166
MaRDI QID: Q6588360
Changshi Li, Jerry Zhijian Yang, Yu Ling Jiao
Publication date: 15 August 2024
Published in: Information Sciences
Cites Work
- Fast learning rates for plug-in classifiers
- Statistical behavior and consistency of classification methods based on convex risk minimization
- Weak convergence and empirical processes. With applications to statistics
- On the rate of convergence of fully connected deep neural network regression estimates
- On deep learning as a remedy for the curse of dimensionality in nonparametric regression
- Local Rademacher complexities
- How to compare different loss functions and their risks
- Support Vector Machines
- Robust Truncated Hinge Loss Support Vector Machines
- Neural Network Learning
- Boosting in the Presence of Outliers: Adaptive Classification With Nonconvex Loss Functions
- Mathematical Foundations of Infinite-Dimensional Statistical Models
- Deep Neural Networks for Estimation and Inference
- Deep neural networks for rotation-invariance approximation and learning
- Convexity, Classification, and Risk Bounds
- Fully corrective gradient boosting with squared hinge: fast learning rates and early stopping
- Fast convergence rates of deep neural networks for classification