Deep Neural Network Approximation Theory

From MaRDI portal
Publication:5001568

DOI: 10.1109/TIT.2021.3062161
zbMath: 1473.68178
arXiv: 1901.02220
OpenAlex: W3133816032
MaRDI QID: Q5001568

Dennis Elbrächter, Dmytro Perekrestenko, Helmut Bölcskei, Philipp Grohs

Publication date: 22 July 2021

Published in: IEEE Transactions on Information Theory

Full work available at URL: https://arxiv.org/abs/1901.02220




Related Items (41)

Analysis of the Generalization Error: Empirical Risk Minimization over Deep Artificial Neural Networks Overcomes the Curse of Dimensionality in the Numerical Approximation of Black--Scholes Partial Differential Equations
Neural network approximation
Metric entropy limits on recurrent neural network learning of linear dynamical systems
On sharpness of an error bound for deep ReLU network approximation
Scientific machine learning through physics-informed neural networks: where we are and what's next
Variational physics informed neural networks: the role of quadratures and test functions
Approximations with deep neural networks in Sobolev time-space
Full error analysis for the training of deep neural networks
Deep ReLU neural network approximation in Bochner spaces and applications to parametric PDEs
Convergence of deep convolutional neural networks
Relaxation approach for learning neural network regularizers for a class of identification problems
Physics-informed neural networks for approximating dynamic (hyperbolic) PDEs of second order in time: error analysis and algorithms
Exponential Convergence of Deep Operator Networks for Elliptic Partial Differential Equations
Deep Learning in High Dimension: Neural Network Expression Rates for Analytic Functions in \(\pmb{L^2(\mathbb{R}^d,\gamma_d)}\)
Drift estimation for a multi-dimensional diffusion process using deep neural networks
Invariant spectral foliations with applications to model order reduction and synthesis
Phase transitions in rate distortion theory and deep learning
A multivariate Riesz basis of ReLU neural networks
Collocation approximation by deep neural ReLU networks for parametric and stochastic PDEs with lognormal inputs
Hierarchical regularization networks for sparsification based learning on noisy datasets
Neural network approximation and estimation of classifiers with classification boundary in a Barron class
Data-driven reduced order models using invariant foliations, manifolds and autoencoders
Solving Kolmogorov PDEs without the curse of dimensionality via deep learning and asymptotic expansion with Malliavin calculus
Deep learning for inverse problems with unknown operator
Computation and learning in high dimensions. Abstracts from the workshop held August 1--7, 2021 (hybrid meeting)
Sobolev-type embeddings for neural network approximation spaces
Approximation in shift-invariant spaces with deep ReLU neural networks
Integral representations of shallow neural network with Rectified Power Unit activation function
Gabor neural networks with proven approximation properties
Solving the Kolmogorov PDE by means of deep learning
On the rate of convergence of fully connected deep neural network regression estimates
High-dimensional distribution generation through deep neural networks
Solving PDEs by variational physics-informed neural networks: an a posteriori error analysis
Design of the monodomain model by artificial neural networks
A phase transition for finding needles in nonlinear haystacks with LASSO artificial neural networks
A measure theoretical approach to the mean-field maximum principle for training NeurODEs
Explicitly antisymmetrized neural network layers for variational Monte Carlo simulation
Optimal Approximation with Sparsely Connected Deep Neural Networks
DNN expression rate analysis of high-dimensional PDEs: application to option pricing
A theoretical analysis of deep neural networks and parametric PDEs
Robust and resource-efficient identification of two hidden layer neural networks




This page was built for publication: Deep Neural Network Approximation Theory