Neural networks with ReLU powers need less depth
Publication: Q6535869
DOI: 10.1016/j.neunet.2023.12.027 (MaRDI QID: Q6535869)
Jose Ernie C. Lope, Kurt Izak M. Cabanilla, Rhudaina Z. Mohammad
Publication date: 5 March 2024
Published in: Neural Networks
Cites Work
- Title not available
- Title not available
- Multilayer feedforward networks are universal approximators
- Approximation spaces of deep neural networks
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Error bounds for approximations with deep ReLU networks
- On the successive supersymmetric rank-1 decomposition of higher-order supersymmetric tensors
- Tensor Analysis
- Approximation by superpositions of a sigmoidal function
- Simultaneous approximation of a smooth function and its derivatives by deep neural networks with piecewise-polynomial activations