
Mean field analysis of neural networks: a central limit theorem

From MaRDI portal
Publication:2301498

DOI: 10.1016/j.spa.2019.06.003
zbMath: 1441.60022
arXiv: 1808.09372
OpenAlex: W2963791871
MaRDI QID: Q2301498

Konstantinos V. Spiliopoulos, Justin A. Sirignano

Publication date: 24 February 2020

Published in: Stochastic Processes and their Applications

Full work available at URL: https://arxiv.org/abs/1808.09372




Related Items (27)

Machine learning from a continuous viewpoint. I
Align, then memorise: the dynamics of learning with feedback alignment*
Align, then memorise: the dynamics of learning with feedback alignment*
Particle dual averaging: optimization of mean field neural network with global convergence rate analysis*
Mean-field and kinetic descriptions of neural differential equations
Mean Field Analysis of Deep Neural Networks
Asymptotics of Reinforcement Learning with Neural Networks
Unbiased Deep Solvers for Linear Parametric PDEs
Training Neural Networks as Learning Data-adaptive Kernels: Provable Representation and Approximation Benefits
Continuous limits of residual neural networks in case of large input data
Unnamed Item
A comparative analysis of optimization and generalization properties of two-layer neural network and random feature models under gradient descent dynamics
High-dimensional limit theorems for SGD: Effective dynamics and critical scaling
Benign Overfitting and Noisy Features
Normalization effects on deep neural networks
Mirror descent algorithms for minimizing interacting free energy
Unnamed Item
Optimization for deep learning: an overview
Landscape and training regimes in deep learning
Plateau Phenomenon in Gradient Descent Training of RELU Networks: Explanation, Quantification, and Avoidance
Machine Learning and Computational Mathematics
Analysis of a two-layer neural network via displacement convexity
Linearized two-layers neural networks in high dimension
Mean Field Analysis of Neural Networks: A Law of Large Numbers
Normalization effects on shallow neural networks and related asymptotic expansions
Dynamics of stochastic gradient descent for two-layer neural networks in the teacher-student setup*
Asymptotic properties of one-layer artificial neural networks with sparse connectivity



Cites Work


This page was built for publication: Mean field analysis of neural networks: a central limit theorem