Hidden unit specialization in layered neural networks: ReLU vs. sigmoidal activation
Publication: 2068413
DOI: 10.1016/j.physa.2020.125517
OpenAlex: W2981298641
MaRDI QID: Q2068413
Michiel Straat, Michael Biehl, Elisa Oostwal
Publication date: 19 January 2022
Published in: Physica A
Full work available at URL: https://arxiv.org/abs/1910.07476
Related Items (1)
Cites Work
- Statistical physics and representations in real and artificial neural networks
- Statistical Mechanics of Learning
- Functional optimization of online algorithms in multilayer neural networks
- Storage capacity of the fully-connected committee machine
- Learning from examples in fully connected committee machines
- Transient dynamics of on-line learning in two-layered neural networks
- Learning by on-line gradient descent
- Learning dynamics on different timescales
- Mean-field inference methods for neural networks
- Phase Transitions in Machine Learning
- The committee machine: computational to statistical gaps in learning a two-layers neural network
- Approximation by superpositions of a sigmoidal function
- The elements of statistical learning. Data mining, inference, and prediction