Deep ReLU neural networks overcome the curse of dimensionality for partial integrodifferential equations
Publication: 5873924
DOI: 10.1142/S0219530522500129
OpenAlex: W3163741067
Wikidata: Q114072416 (Scholia: Q114072416)
MaRDI QID: Q5873924
Publication date: 10 February 2023
Published in: Analysis and Applications
Full work available at URL: https://arxiv.org/abs/2102.11707
Keywords: option pricing; curse of dimensionality; jump-diffusion process; deep neural network; partial integrodifferential equation; expression rate
MSC classifications: Artificial neural networks and deep learning (68T07); Integro-partial differential equations (45K05); Computational methods for stochastic equations (aspects of stochastic analysis) (60H35)
Cites Work
- A Feynman-Kac-type formula for Lévy processes with discontinuous killing rates
- Numerical methods for Lévy processes
- Numerical solution of stochastic differential equations with jumps in finance
- Stochastic differential equations of jump type and Lévy processes in diffeomorphisms group
- On best approximation by ridge functions
- The Euler scheme for Lévy driven stochastic differential equations
- Efficient approximation of solutions of parametric linear transport equations by ReLU DNNs
- DNN expression rate analysis of high-dimensional PDEs: application to option pricing
- The Barron space and the flow-induced function spaces for neural network models
- Exponential ReLU DNN expression of holomorphic maps in high dimension
- High-order approximation rates for shallow neural networks with cosine and \(\mathrm{ReLU}^k\) activation functions
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- A proof that rectified deep neural networks overcome the curse of dimensionality in the numerical approximation of semilinear heat equations
- Machine learning for fast and reliable solution of time-dependent differential equations
- Kolmogorov width decay and poor approximators in machine learning: shallow neural networks, random feature models and neural tangent kernels
- Deep ReLU network expression rates for option prices in high-dimensional, exponential Lévy models
- Strong convergence of the Euler-Maruyama approximation for a class of Lévy-driven SDEs
- Error bounds for approximations with deep ReLU networks
- On multilevel Picard numerical approximations for high-dimensional nonlinear parabolic partial differential equations and high-dimensional nonlinear backward stochastic differential equations
- Feynman-Kac representation for Hamilton-Jacobi-Bellman IPDE
- Numerical methods for nonlinear stochastic differential equations with jumps
- Approximations of small jumps of Lévy processes with a view towards simulation
- The Euler Scheme for Feller Processes
- Backward stochastic differential equations and integral-partial differential equations
- Lévy Processes and Stochastic Calculus
- Universal approximation bounds for superpositions of a sigmoidal function
- Deep learning volatility: a deep neural network perspective on pricing and calibration in (rough) volatility models
- Uniform error estimates for artificial neural network approximations for heat equations
- Deep ReLU networks and high-order finite element methods
- Overcoming the curse of dimensionality in the numerical approximation of semilinear parabolic partial differential equations
- Deep hedging
- Breaking the Curse of Dimensionality with Convex Neural Networks
- Deep optimal stopping