Hyper-flexible convolutional neural networks based on generalized Lehmer and power means
Publication: 6488726
DOI: 10.1016/J.NEUNET.2022.08.017
Wikidata: Q116846987
Scholia: Q116846987
MaRDI QID: Q6488726
Authors: Vladyslav Branytskyi, Mariia Golovianko, Vagan Terziyan, Diana Malyk
Publication date: 18 October 2023
Published in: Neural Networks
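For context, the two families of means named in the title have standard textbook definitions (the paper's "generalized" variants may use a different parametrization); for positive inputs $x_1,\dots,x_n$:

\[
L_p(x_1,\dots,x_n) = \frac{\sum_{i=1}^{n} x_i^{\,p}}{\sum_{i=1}^{n} x_i^{\,p-1}},
\qquad
M_p(x_1,\dots,x_n) = \left(\frac{1}{n}\sum_{i=1}^{n} x_i^{\,p}\right)^{1/p},
\]

where $L_p$ is the Lehmer mean and $M_p$ the power (Hölder) mean. Both interpolate smoothly between minimum, arithmetic-mean, and maximum aggregation as $p$ varies (e.g., $M_{-\infty}=\min$, $M_1$ is the arithmetic mean, $M_{\infty}=\max$), which makes the parameter $p$ a natural learnable degree of freedom.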
Cites Work
- Weak monotonicity of Lehmer and Gini means
- Power Function Error Initialization Can Improve Convergence of Backpropagation Learning in Neural Networks for Classification
- Multibranch Formal Neuron: An Internally Nonlinear Learning Unit
- Flexible Transmitter Network
- A Correspondence Between Normalization Strategies in Artificial and Biological Neural Networks
- On PDE Characterization of Smooth Hierarchical Functions Computed by Neural Networks
- Simple Convolutional-Based Models: Are They Learning the Task or the Data?
- The unreasonable effectiveness of deep learning in artificial intelligence
- Minimal Spiking Neuron for Solving Multilabel Classification Tasks
- Integration of Stochastic Models by Minimizing α-Divergence