Power Function Error Initialization Can Improve Convergence of Backpropagation Learning in Neural Networks for Classification
Publication: 5033514
DOI: 10.1162/neco_a_01407
OpenAlex: W3167758927
MaRDI QID: Q5033514
Publication date: 23 February 2022
Published in: Neural Computation
Full work available at URL: https://doi.org/10.1162/neco_a_01407
Related Items (2):
- On the antiderivatives of \(x^p/(1 - x)\) with an application to optimize loss functions for classification with neural networks
- Hyper-flexible convolutional neural networks based on generalized Lehmer and power means