Deep learning in random neural fields: numerical experiments via neural tangent kernel
From MaRDI portal
Publication: 6053432
DOI: 10.1016/j.neunet.2022.12.020 · zbMath: 1527.92011 · arXiv: 2202.05254 · MaRDI QID: Q6053432
Ryo Karakida, Kotaro Sakamoto, Kaito Watanabe, Shun-ichi Amari, Sho Sonoda
Publication date: 18 October 2023
Published in: Neural Networks
Full work available at URL: https://arxiv.org/abs/2202.05254
Mathematics Subject Classification: Artificial neural networks and deep learning (68T07); Neural networks for/in biological studies, artificial life and related topics (92B20)
Cites Work
- Formation of topographic maps and columnar microstructures in nerve fields
- Self-organized formation of topologically correct feature maps
- Dynamics of pattern formation in lateral-inhibition type neural fields
- Gradient descent optimizes over-parameterized deep ReLU networks
- Propagation of chaos in neural fields
- Spatiotemporal dynamics of continuum neural fields
- Canonical representations of Gaussian processes and their applications
- A Mathematical Foundation for Statistical Neurodynamics
- A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue
- Tutorial on Neural Field Theory
- Pathological Spectra of the Fisher Information Metric and Its Variants in Deep Neural Networks
- Any Target Function Exists in a Neighborhood of Any Sufficiently Wide Random Network: A Geometrical Perspective
- Continuation of Localized Coherent Structures in Nonlocal Neural Field Equations
- Learning Patterns and Pattern Sequences by Self-Organizing Nets of Threshold Elements
- Characteristics of Random Nets of Analog Neuron-Like Elements