Expressive power of ReLU and step networks under floating-point operations
DOI: 10.1016/j.neunet.2024.106297
MaRDI QID: Q6543653
Sejun Park, Geonho Hwang, Yeachan Park, Won-Yeol Lee
Publication date: 24 May 2024
Published in: Neural Networks
Cites Work
- Title not available
- Title not available
- Sharp error bounds for complex floating-point inversion
- On the capabilities of multilayer perceptrons
- Multilayer feedforward networks are universal approximators
- Stupid is as stupid does: taking the square root of the square of a floating-point number
- Exploiting Structure in Floating-Point Arithmetic
- Elementary Functions
- On relative errors of floating-point operations: Optimal bounds and applications
- Handbook of Floating-Point Arithmetic
- Memory Capacity of Neural Networks with Threshold and Rectified Linear Unit Activations
- Approximation by superpositions of a sigmoidal function
- Floating-point arithmetic