scientific article; zbMATH DE number 7049722
zbMATH: 1483.62072; arXiv: 1605.02832; MaRDI QID: Q4633011
Publication date: 2 May 2019
Full work available at URL: https://arxiv.org/abs/1605.02832
Title: unavailable (zbMATH Open Web Interface contents withheld due to conflicting licenses)
Keywords: continuum limit; backward heat equation; Wasserstein geometry; representation learning; denoising autoencoder; ridgelet analysis; flow representation
Mathematics Subject Classification:
- Nonparametric estimation (62G05)
- Artificial neural networks and deep learning (68T07)
- Initial value problems, existence, uniqueness, continuous dependence and continuation of solutions to ordinary differential equations (34A12)
- Variational problems in a geometric measure-theoretic setting (49Q20)
- Spaces of measures, convergence of measures (28A33)
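The keywords "denoising autoencoder" and "backward heat equation" point at the identity established in the cited work "A Connection Between Score Matching and Denoising Autoencoders": under Gaussian corruption, the optimal denoiser equals the posterior mean, which Tweedie's formula expresses through the score of the smoothed density, r(y) = y + σ² ∇ log p_σ(y). A minimal numeric sketch of that identity, assuming 1D standard normal data so the score is available in closed form (all variable names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5
x = rng.standard_normal(200_000)             # clean data ~ N(0, 1)
y = x + sigma * rng.standard_normal(x.size)  # Gaussian corruption, y ~ N(0, 1 + sigma^2)

# Tweedie / score identity: E[x | y] = y + sigma^2 * d/dy log p_sigma(y).
# Here p_sigma = N(0, 1 + sigma^2), so the score at y0 is -y0 / (1 + sigma^2).
y0 = 0.8
score = -y0 / (1.0 + sigma**2)
tweedie = y0 + sigma**2 * score              # closed form: y0 / (1 + sigma^2)

# Empirical posterior mean of x given y near y0, as a Monte Carlo check
mask = np.abs(y - y0) < 0.02
empirical = x[mask].mean()

print(tweedie)    # 0.64
print(empirical)  # close to 0.64
```

The match between the analytic value and the conditioned sample mean is the one-dimensional instance of the denoiser-as-score picture that the paper's flow representation builds on.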
Related Items (4)
Cites Work
- Unnamed items (11)
- Computational Optimal Transport: With Applications to Data Science
- Wasserstein geometry of Gaussian measures
- Why does deep and cheap learning work so well?
- Bayesian learning for neural networks
- Complexity estimates based on integral transforms induced by computational units
- Nonparametric regression using deep neural networks with ReLU activation function
- Error bounds for approximations with deep ReLU networks
- Neural network with unbounded activation functions is universal approximator
- Improved minimax predictive densities under Kullback-Leibler loss
- Polar factorization and monotone rearrangement of vector-valued functions
- Universal approximation bounds for superpositions of a sigmoidal function
- Approximation by combinations of ReLU and squared ReLU ridge functions with $\ell^1$ and $\ell^0$ controls
- GSNs: generative stochastic networks
- Stable architectures for deep neural networks
- A Connection Between Score Matching and Denoising Autoencoders
- An Introduction to Variational Autoencoders
- Breaking the Curse of Dimensionality with Convex Neural Networks
- On the Equivalence between Kernel Quadrature Rules and Random Feature Expansions
- Optimal Transport
- Introduction to nonparametric estimation