Stochastic rounding and reduced-precision fixed-point arithmetic for solving neural ordinary differential equations
DOI: 10.1098/rsta.2019.0052 · zbMath: 1462.65081 · arXiv: 1904.11263 · OpenAlex: W3098574217 · Wikidata: Q92754330 · Scholia: Q92754330 · MaRDI QID: Q4993504
Michael Hopkins, Steve B. Furber, Mantas Mikaitis, Dave R. Lester
Publication date: 15 June 2021
Published in: Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences
Full work available at URL: https://arxiv.org/abs/1904.11263
Keywords: software; ordinary differential equation; artificial intelligence; differential equations; dither; fixed-point arithmetic; Izhikevich neuron model; SpiNNaker; stochastic rounding
MSC classification: Numerical methods for initial value problems involving ordinary differential equations (65L05); Numerical algorithms for specific classes of architectures (65Y10)
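The keywords name stochastic rounding in fixed-point arithmetic, the rounding mode the paper studies for low-precision ODE solvers. A minimal sketch of the general technique (a hypothetical helper, not code from the paper): round a value to a fixed-point grid, rounding up with probability equal to the fractional residue so the result is unbiased in expectation.

```python
import math
import random


def stochastic_round(x, frac_bits=8):
    """Round x onto a fixed-point grid with frac_bits fractional bits.

    The value is rounded up with probability equal to its fractional
    residue on the grid, so E[stochastic_round(x)] == x (unbiased),
    unlike round-to-nearest, which can accumulate systematic bias.
    """
    scale = 1 << frac_bits          # grid spacing is 2**(-frac_bits)
    scaled = x * scale
    low = math.floor(scaled)        # nearest grid point below
    residue = scaled - low          # in [0, 1): distance past that point
    if random.random() < residue:   # round up with probability = residue
        low += 1
    return low / scale
```

Exactly representable inputs (zero residue) are returned unchanged; for other inputs the average over many rounded samples converges to the true value, which is the property that makes this mode attractive for accumulating small ODE increments at low precision.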
Related Items (5)
Cites Work
- On explicit two-derivative Runge-Kutta methods
- Probabilistic rounding in neural network learning with limited precision
- Shift Register Sequences – A Retrospective Account
- TestU01
- Handbook of Floating-Point Arithmetic
- Accuracy and Stability of Numerical Algorithms
- Lectures on Finite Precision Computations
- Accuracy and Efficiency in Fixed-Point Neural ODE Solvers
- Reprint of a Note on Rounding-Off Errors
- Tapered Floating Point: A New Floating-Point Representation