Spiking Neural Networks in the Alexiewicz Topology: A New Perspective on Analysis and Error Bounds

arXiv: 2305.05772
MaRDI QID: Q6510032

Bernhard A. Moser, Michael Lunglmayr


Abstract: In order to ease the analysis of error propagation in neuromorphic computing and to gain a better understanding of spiking neural networks (SNNs), we address the problem of the mathematical analysis of SNNs as endomorphisms that map spike trains to spike trains. A central question is the adequate structure for a space of spike trains and its implications for the design of error measures for SNNs, including time delay, threshold deviations, and the design of the reinitialization mode of the leaky integrate-and-fire (LIF) neuron model. First, we identify the underlying topology by analyzing the closure of the set of all sub-threshold signals of a LIF model. For zero leakage this approach yields the Alexiewicz topology, which we adapt to LIF neurons with arbitrary positive leakage. As a result, the LIF model can be understood as spike train quantization in the corresponding norm. In this way we obtain various error bounds and inequalities, such as a quasi-isometry relation between incoming and outgoing spike trains. A further result is a Lipschitz-style global upper bound on the error propagation and a related resonance-type phenomenon.
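
As a concrete illustration of the quantization view, the following is a minimal sketch for the zero-leakage case (not taken from the paper or the companion repository): the discrete Alexiewicz norm of a spike train given by its amplitudes reduces to the maximal absolute partial sum, and an integrate-and-fire neuron with threshold theta and reset-by-subtraction produces an output whose residual with respect to the input stays below theta in that norm. The function names (alexiewicz_norm, lif_zero_leak), the reset-by-subtraction choice, and the fixed time grid are assumptions of this sketch.

    import numpy as np

    def alexiewicz_norm(amps):
        # Discrete Alexiewicz norm for zero leakage: the maximal absolute
        # partial sum of the spike amplitudes.
        return float(np.max(np.abs(np.cumsum(amps)))) if len(amps) else 0.0

    def lif_zero_leak(amps, theta=1.0):
        # Integrate-and-fire with zero leakage and reset-by-subtraction:
        # accumulate amplitudes; whenever the potential reaches the threshold
        # in absolute value, emit a spike of size +/- theta and subtract it.
        out, u = [], 0.0
        for a in amps:
            u += a
            s = 0.0
            while abs(u) >= theta:
                s += np.sign(u) * theta
                u -= np.sign(u) * theta
            out.append(s)
        return np.array(out)

    rng = np.random.default_rng(0)
    x = rng.normal(size=20)            # input spike amplitudes on a fixed time grid
    y = lif_zero_leak(x, theta=1.0)
    print(alexiewicz_norm(x - y))      # quantization error, stays below theta

Under these assumptions the printed value is bounded by the threshold, which is the discrete, zero-leakage analogue of the quantization error bound described in the abstract; the case of positive leakage uses the leakage-adapted norm developed in the paper.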




Has companion code repository: https://github.com/lunglmayrmoser/alexsnn







