Pages that link to "Item:Q4967449"
From MaRDI portal
The following pages link to A mean field view of the landscape of two-layer neural networks (Q4967449):
Displaying 50 items.
- Machine learning from a continuous viewpoint. I (Q829085)
- Analysis of a two-layer neural network via displacement convexity (Q1996787)
- Topological properties of the set of functions generated by neural networks of fixed size (Q2031060)
- A selective overview of deep learning (Q2038303)
- Linearized two-layers neural networks in high dimension (Q2039801)
- High-dimensional dynamics of generalization error in neural networks (Q2057778)
- Maximum likelihood estimation of potential energy in interacting particle systems from single-trajectory data (Q2064853)
- Reinforcement learning and stochastic optimisation (Q2072112)
- Fitting small piece-wise linear neural network models to interpolate data sets (Q2072583)
- Normalization effects on shallow neural networks and related asymptotic expansions (Q2072629)
- Mean-field Langevin dynamics and energy landscape of neural networks (Q2077356)
- Supervised learning from noisy observations: combining machine-learning techniques with data assimilation (Q2077682)
- Propagation of chaos: a review of models, methods and applications. I: Models and methods (Q2088752)
- Propagation of chaos: a review of models, methods and applications. II: Applications (Q2088753)
- Measurement error models: from nonparametric methods to deep neural networks (Q2092892)
- Stabilize deep ResNet with a sharp scaling factor \(\tau\) (Q2102389)
- Asymptotic properties of one-layer artificial neural networks with sparse connectivity (Q2105365)
- A trajectorial approach to relative entropy dissipation of McKean-Vlasov diffusions: gradient flows and HWBI inequalities (Q2108507)
- Do ideas have shape? Idea registration as the continuous limit of artificial neural networks (Q2111734)
- Representation formulas and pointwise properties for Barron functions (Q2113295)
- Surprises in high-dimensional ridgeless least squares interpolation (Q2131262)
- Loss landscapes and optimization in over-parameterized non-linear systems and neural networks (Q2134108)
- Mean-field and kinetic descriptions of neural differential equations (Q2148968)
- Sparse optimization on measures with over-parameterized gradient descent (Q2149558)
- A Riemannian mean field formulation for two-layer neural networks with batch normalization (Q2157932)
- Hessian informed mirror descent (Q2162317)
- Neural collapse with unconstrained features (Q2164655)
- A comparative analysis of optimization and generalization properties of two-layer neural network and random feature models under gradient descent dynamics (Q2197845)
- Mirror descent algorithms for minimizing interacting free energy (Q2204545)
- Mean field limit for Coulomb-type flows (Q2217890)
- Optimization for deep learning: an overview (Q2218095)
- Landscape and training regimes in deep learning (Q2231925)
- Data-driven vector soliton solutions of coupled nonlinear Schrödinger equation using a deep learning algorithm (Q2246919)
- Mean field analysis of neural networks: a central limit theorem (Q2301498)
- Polyak-Łojasiewicz inequality on the space of measures and convergence of mean-field birth-death processes (Q2694477)
- Geometric compression of invariant manifolds in neural networks (Q3382321)
- (Q4998974)
- (Q5011560)
- (Q5011561)
- Matrix inference and estimation in multi-layer models (Q5020043)
- An analytic theory of shallow networks dynamics for hinge loss classification (Q5020044)
- Dynamical mean-field theory for stochastic gradient descent in Gaussian mixture classification (Q5020049)
- When do neural networks outperform kernel methods? (Q5020050)
- Analysis of the Generalization Error: Empirical Risk Minimization over Deep Artificial Neural Networks Overcomes the Curse of Dimensionality in the Numerical Approximation of Black–Scholes Partial Differential Equations (Q5037569)
- The effective noise of stochastic gradient descent (Q5043083)
- Align, then memorise: the dynamics of learning with feedback alignment (Q5049525)
- Revisiting Landscape Analysis in Deep Neural Networks: Eliminating Decreasing Paths to Infinity (Q5051381)
- (Q5053253)
- (Q5054655)
- Two-Layer Neural Networks with Values in a Banach Space (Q5055293)