Mean Field Analysis of Neural Networks: A Law of Large Numbers
From MaRDI portal
Publication: 5219306
DOI: 10.1137/18M1192184
zbMath: 1440.60008
arXiv: 1805.01053
OpenAlex: W3010825589
Wikidata: Q114847156
Scholia: Q114847156
MaRDI QID: Q5219306
Justin A. Sirignano, Konstantinos V. Spiliopoulos
Publication date: 11 March 2020
Published in: SIAM Journal on Applied Mathematics
Full work available at URL: https://arxiv.org/abs/1805.01053
Computational methods for problems pertaining to probability theory (60-08)
Strong limit theorems (60F15)
Neural nets and related approaches to inference from stochastic processes (62M45)
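For context, a rough sketch of the paper's main result, paraphrased from the arXiv abstract (the notation below is illustrative rather than the paper's own, and the precise hypotheses and mode of convergence are stated in the paper): a single-hidden-layer neural network with N hidden units,

\[
  g^N_\theta(x) \;=\; \frac{1}{N} \sum_{i=1}^{N} c^i \, \sigma\!\left(w^i \cdot x\right),
\]

is trained by stochastic gradient descent, and the empirical measure of its parameters,

\[
  \mu^N_t \;=\; \frac{1}{N} \sum_{i=1}^{N} \delta_{(c^i_t,\, w^i_t)},
\]

converges as N \to \infty to a deterministic limit \bar\mu_t solving a nonlinear partial differential equation, so that the network output converges to its mean-field limit (a law of large numbers).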
Related Items (33)
- Nonlocal cross-diffusion systems for multi-species populations and networks
- Mehler’s Formula, Branching Process, and Compositional Kernels of Deep Neural Networks
- The Continuous Formulation of Shallow Neural Networks as Wasserstein-Type Gradient Flows
- Deep learning: a statistical viewpoint
- Surprises in high-dimensional ridgeless least squares interpolation
- Two-Layer Neural Networks with Values in a Banach Space
- Sparse optimization on measures with over-parameterized gradient descent
- Mean Field Analysis of Deep Neural Networks
- Asymptotics of Reinforcement Learning with Neural Networks
- Large Sample Mean-Field Stochastic Optimization
- A rigorous framework for the mean field limit of multilayer neural networks
- A class of dimension-free metrics for the convergence of empirical measures
- Sharp uniform-in-time propagation of chaos
- Continuous limits of residual neural networks in case of large input data
- Online parameter estimation for the McKean-Vlasov stochastic differential equation
- A blob method for inhomogeneous diffusion with applications to multi-agent control and sampling
- Non-mean-field Vicsek-type models for collective behavior
- Stochastic gradient descent with noise of machine learning type. II: Continuous time analysis
- Normalization effects on deep neural networks
- Gradient descent on infinitely wide neural networks: global convergence and generalization
- Landscape and training regimes in deep learning
- Mean Field Limits for Interacting Diffusions with Colored Noise: Phase Transitions and Spectral Numerical Methods
- Fast Non-mean-field Networks: Uniform in Time Averaging
- A selective overview of deep learning
- Reinforcement learning and stochastic optimisation
- Normalization effects on shallow neural networks and related asymptotic expansions
- Mean-field Langevin dynamics and energy landscape of neural networks
- Supervised learning from noisy observations: combining machine-learning techniques with data assimilation
- Propagation of chaos: a review of models, methods and applications. I: Models and methods
- Propagation of chaos: a review of models, methods and applications. II: Applications
- Asymptotic properties of one-layer artificial neural networks with sparse connectivity
- Suboptimal Local Minima Exist for Wide Neural Networks with Smooth Activations
- Representation formulas and pointwise properties for Barron functions
Uses Software
Cites Work
- Machine learning strategies for systems with invariance properties
- Heterogeneous credit portfolios and the dynamics of the aggregate losses
- Large portfolio losses: A dynamic contagion model
- Approximation and estimation bounds for artificial neural networks
- McKean-Vlasov limit for interacting random processes in random media
- A stochastic McKean-Vlasov equation for absorbing diffusions on the half-line
- Multilayer feedforward networks are universal approximators
- Large deviations and mean-field theory for asymmetric random recurrent neural networks
- Kinetic equilibration rates for granular media and related equations: entropy dissipation and mass transportation estimates
- Default clustering in large portfolios: typical events
- DGM: a deep learning algorithm for solving partial differential equations
- Mean-field Langevin dynamics and energy landscape of neural networks
- Mean field analysis of neural networks: a central limit theorem
- Particle systems with a singular mean-field self-excitation. Application to neuronal networks
- Mean-Field Limit of a Stochastic Particle System Smoothly Interacting Through Threshold Hitting-Times and Applications to Neural Networks with Dendritic Component
- The Variational Formulation of the Fokker–Planck Equation
- A mean field view of the landscape of two-layer neural networks
- Large portfolio asymptotics for loss from default
- Universal features of price formation in financial markets: perspectives from deep learning
- Systemic Risk in Interbanking Networks
- Reynolds averaged turbulence modelling using deep neural networks with embedded invariance