scientific article; zbMATH DE number 1405266
From MaRDI portal
Publication: 4938227
zbMath: 0959.68109
MaRDI QID: Q4938227
Publication date: 23 February 2000
Title: zbMATH Open Web Interface contents unavailable due to conflicting licenses.
Related Items (only showing first 100 items)
Deep advantage learning for optimal dynamic treatment regime ⋮ Deep Neural Networks, Generic Universal Interpolation, and Controlled ODEs ⋮ Neural network approximation ⋮ Turnpike in optimal control of PDEs, ResNets, and beyond ⋮ Convergence of Physics-Informed Neural Networks Applied to Linear Second-Order Elliptic Interface Problems ⋮ A Proof that Artificial Neural Networks Overcome the Curse of Dimensionality in the Numerical Approximation of Black–Scholes Partial Differential Equations ⋮ An introduction to the use of neural networks in control systems ⋮ On the error of approximation by ridge functions with two fixed directions ⋮ Characterization of an extremal sum of ridge functions ⋮ A deep learning approach to Reduced Order Modelling of parameter dependent partial differential equations ⋮ A Sobolev-type upper bound for rates of approximation by linear combinations of Heaviside plane waves ⋮ Approximation of Sobolev classes by polynomials and ridge functions ⋮ Deep distributed convolutional neural networks: Universality ⋮ Theoretical issues in deep networks ⋮ Trading Signals in VIX Futures ⋮ Neural network with unbounded activation functions is universal approximator ⋮ Deep ReLU Networks Overcome the Curse of Dimensionality for Generalized Bandlimited Functions ⋮ Full error analysis for the training of deep neural networks ⋮ A note on the equioscillation theorem for best ridge function approximation ⋮ Wasserstein generative adversarial uncertainty quantification in physics-informed neural networks ⋮ DeepParticle: learning invariant measure by a deep neural network minimizing Wasserstein distance on data generated from an interacting particle method ⋮ A note on the applications of one primary function in deep neural networks ⋮ A shallow Ritz method for elliptic problems with singular sources ⋮ Approximation bounds for norm constrained neural networks with applications to regression and GANs ⋮ A deep network construction that adapts to intrinsic dimensionality beyond the domain ⋮ On the approximation of functions by tanh neural networks ⋮ The generalized extreme learning machines: tuning hyperparameters and limiting approach for the Moore-Penrose generalized inverse ⋮ Approximation capabilities of neural networks on unbounded domains ⋮ On the capacity of deep generative networks for approximating distributions ⋮ On sharpness of error bounds for multivariate neural network approximation ⋮ A-WPINN algorithm for the data-driven vector-soliton solutions and parameter discovery of general coupled nonlinear equations ⋮ Approximating smooth and sparse functions by deep neural networks: optimal approximation rates and saturation ⋮ Three ways to solve partial differential equations with neural networks — A review ⋮ Deep learning methods for partial differential equations and related parameter identification problems ⋮ Kähler geometry of framed quiver moduli and machine learning ⋮ A class of dimension-free metrics for the convergence of empirical measures ⋮ Neural network interpolation operators optimized by Lagrange polynomial ⋮ Applications of limiters, neural networks and polynomial annihilation in higher-order FD/FV schemes ⋮ Neural networks in Fréchet spaces ⋮ A survey on modern trainable activation functions ⋮ Optimal control by deep learning techniques and its applications on epidemic models ⋮ On the representation by linear superpositions ⋮ DeepBND: a machine learning approach to enhance multiscale solid mechanics ⋮ Convergence for a family of neural network operators in Orlicz spaces ⋮ Multi-scale fusion network: a new deep learning structure for elliptic interface problems ⋮ Mini-workshop: Analysis of data-driven optimal control. Abstracts from the mini-workshop held May 9--15, 2021 (hybrid meeting) ⋮ Control of partial differential equations via physics-informed neural networks ⋮ Approximation by sums of ridge functions with fixed directions ⋮ A three layer neural network can represent any multivariate function ⋮ Sobolev-type embeddings for neural network approximation spaces ⋮ Deep learning in high dimension: Neural network expression rates for generalized polynomial chaos expansions in UQ ⋮ Computing the Approximation Error for Neural Networks with Weights Varying on Fixed Directions ⋮ A Single Hidden Layer Feedforward Network with Only One Neuron in the Hidden Layer Can Approximate Any Univariate Function ⋮ Deep ReLU networks and high-order finite element methods ⋮ A collocation method for solving nonlinear Volterra integro-differential equations of neutral type by sigmoidal functions ⋮ Approximate dynamic programming for stochastic \(N\)-stage optimization with application to optimal consumption under uncertainty ⋮ Deep neural network expression of posterior expectations in Bayesian PDE inversion ⋮ DeepXDE: A Deep Learning Library for Solving Differential Equations ⋮ Approximation Properties of Ridge Functions and Extreme Learning Machines ⋮ Assessing the effectiveness of artificial neural networks on problems related to elliptic curve cryptography ⋮ Plateau Phenomenon in Gradient Descent Training of RELU Networks: Explanation, Quantification, and Avoidance ⋮ Absence of bottlenecks in a neural network determines its generic functional properties ⋮ Extreme learning machine collocation for the numerical solution of elliptic PDEs with sharp gradients ⋮ Finite Neuron Method and Convergence Analysis ⋮ On the Convergence of Physics Informed Neural Networks for Linear Second-Order Elliptic and Parabolic Type PDEs ⋮ Variational Representations and Neural Network Estimation of Rényi Divergences ⋮ A recursive algorithm for nonlinear least-squares problems ⋮ Deep Nitsche Method: Deep Ritz Method with Essential Boundary Conditions ⋮ How Deep Are Deep Gaussian Processes? ⋮ Interpolation by neural network operators activated by ramp functions ⋮ Convergence of a family of neural network operators of the Kantorovich type ⋮ Approximation by series of sigmoidal functions with applications to neural networks ⋮ Complexity of neural network approximation with limited information: A worst case approach ⋮ Determining the number of real roots of polynomials through neural networks ⋮ Pseudo-dimension and entropy of manifolds formed by affine-invariant dictionary ⋮ Limitations of shallow nets approximation ⋮ Error bounds for approximations with deep ReLU networks ⋮ Complexity of Shallow Networks Representing Finite Mappings ⋮ Universality of deep convolutional neural networks ⋮ Equivalence of approximation by convolutional neural networks and fully-connected networks ⋮ Suboptimal Policies for Stochastic \(N\)-Stage Optimization: Accuracy Analysis and a Case Study from Optimal Consumption ⋮ MgNet: a unified framework of multigrid and convolutional neural network ⋮ Data-Driven Learning of Nonautonomous Systems ⋮ Semiglobal optimal feedback stabilization of autonomous systems via deep neural network approximation ⋮ Higher-Order Quasi-Monte Carlo Training of Deep Neural Networks ⋮ On some aspects of approximation of ridge functions ⋮ Approximation Error Analysis of Some Deep Backward Schemes for Nonlinear PDEs ⋮ Optimization with learning-informed differential equation constraints and its applications ⋮ Diffusion on fractal objects modeling and its physics-informed neural network solution ⋮ Optimal Approximation with Sparsely Connected Deep Neural Networks ⋮ New Error Bounds for Deep ReLU Networks Using Sparse Grids ⋮ Spline representation and redundancies of one-dimensional ReLU neural network models ⋮ A New Function Space from Barron Class and Application to Neural Network Approximation ⋮ A novel fully adaptive neural network modeling and implementation using colored Petri nets
This page was built for publication: zbMath 0959.68109 (MaRDI QID Q4938227).