On the antiderivatives of \(x^p/(1 - x)\) with an application to optimize loss functions for classification with neural networks
DOI: 10.1007/s10472-022-09786-2 · OpenAlex: W4221067452 · MaRDI QID: Q2122774
Publication date: 7 April 2022
Published in: Annals of Mathematics and Artificial Intelligence
Full work available at URL: https://doi.org/10.1007/s10472-022-09786-2
Keywords: classification; incomplete beta function; supervised learning; hypergeometric function; deep learning; cross-entropy; power error loss function
MSC classification: Learning and adaptive systems in artificial intelligence (68T05); Neural networks for/in biological studies, artificial life and related topics (92B20); Pattern recognition, speech recognition (68T10); Integrals of Riemann, Stieltjes and Lebesgue type (26A42); Incomplete beta and gamma functions (error functions, probability integral, Fresnel integrals) (33B20); Artificial intelligence (68Txx)
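As a brief note connecting the title to the keywords above (a standard identity, not quoted from the paper's text): on \((0,1)\), the antiderivative of \(x^p/(1-x)\) is an incomplete beta function with second parameter zero, which also admits a Gauss hypergeometric representation:

\[
\int_0^x \frac{t^p}{1-t}\,dt \;=\; B_x(p+1,\,0) \;=\; \frac{x^{p+1}}{p+1}\,{}_2F_1(p+1,\,1;\,p+2;\,x), \qquad 0 < x < 1,
\]

where \(B_x(a,b) = \int_0^x t^{a-1}(1-t)^{b-1}\,dt\) is the incomplete beta function. This is why the incomplete beta and hypergeometric functions appear among the keywords for a paper on power-type loss functions.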
Uses Software
Cites Work
- Uniform representations of the incomplete beta function in terms of elementary functions
- Novelty, Information and Surprise
- Uniform Asymptotic Expansions of the Incomplete Gamma Functions and the Incomplete Beta Function
- Taylor expansion of the accumulated rounding error
- Power Function Error Initialization Can Improve Convergence of Backpropagation Learning in Neural Networks for Classification
- Learning representations by back-propagating errors