Mini-workshop: Nonlinear approximation of high-dimensional functions in scientific computing. Abstracts from the mini-workshop held October 15--20, 2023
DOI: 10.4171/owr/2023/48 · zbMATH Open: 1546.00069 · MaRDI QID: Q6613392
No author found.
Publication date: 2 October 2024
Published in: Oberwolfach Reports
MSC classification:
- Artificial neural networks and deep learning (68T07)
- Proceedings of conferences of miscellaneous specific interest (00B25)
- Proceedings, conferences, collections, etc. pertaining to numerical analysis (65-06)
- Collections of abstracts of lectures (00B05)
- Multilinear algebra, tensor calculus (15A69)
- Approximation by arbitrary nonlinear expressions; widths and entropy (41A46)
- PDEs in connection with control and optimization (35Q93)
- Numerical methods for low-rank matrix approximation; matrix compression (65F55)
Cites Work
- Tensor-Train Decomposition
- TT-cross approximation for multidimensional arrays
- A mathematical introduction to compressive sensing
- Adaptive stochastic Galerkin FEM
- Approximation and estimation bounds for artificial neural networks
- Quantized tensor-structured finite elements for second-order elliptic PDEs in two dimensions
- Optimal global rates of convergence for nonparametric regression
- A distribution-free theory of nonparametric regression
- Constructive representation of functions in low-rank tensor formats
- Computing Lyapunov functions using deep neural networks
- On the rate of convergence of fully connected deep neural network regression estimates
- Proof that deep artificial neural networks overcome the curse of dimensionality in the numerical approximation of Kolmogorov partial differential equations with constant diffusion and nonlinear drift coefficients
- Deep composition of tensor-trains using squared inverse Rosenblatt transports
- A measure theoretical approach to the mean-field maximum principle for training NeurODEs
- Learning with tree tensor networks: complexity estimates and model selection
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Time integration of symmetric and anti-symmetric low-rank matrices and Tucker tensors
- Nonparametric regression using deep neural networks with ReLU activation function
- Error bounds for approximations with deep ReLU networks
- Variational Monte Carlo -- bridging concepts of machine learning and high-dimensional partial differential equations
- On deep learning as a remedy for the curse of dimensionality in nonparametric regression
- The asymptotic error of chaos expansion approximations for stochastic differential equations
- A proposal on machine learning via dynamical systems
- A sharp upper bound for sampling numbers in \(L_2\)
- Adaptive low-rank methods: problems on Sobolev spaces
- Tree Adaptive Approximation in the Hierarchical Tensor Format
- Hierarchical Singular Value Decomposition of Tensors
- Space-time adaptive wavelet methods for parabolic evolution problems
- Hierarchical Tensor Approximation of Output Quantities of Parameter-Dependent PDEs
- Stable architectures for deep neural networks
- HJB-POD-Based Feedback Design for the Optimal Control of Evolution Problems
- Computer Age Statistical Inference, Student Edition
- Existence of dynamical low-rank approximations to parabolic problems
- Tensor Decomposition Methods for High-dimensional Hamilton--Jacobi--Bellman Equations
- Rank Bounds for Approximating Gaussian Densities in the Tensor-Train Format
- Boosted optimal weighted least-squares
- Error estimates for DeepONets: a deep learning framework in infinite dimensions
- Deep learning: a statistical viewpoint
- Low-rank tensor methods for partial differential equations
- Data-Driven Tensor Train Gradient Cross Approximation for Hamilton–Jacobi–Bellman Equations
- On the Approximability of Koopman-Based Operator Lyapunov Equations
- Deep Learning in High Dimension: Neural Network Expression Rates for Analytic Functions in \(\pmb{L^2(\mathbb{R}^d,\gamma_d)}\)
- Convergence rates for shallow neural networks learned by gradient descent
- Scalable conditional deep inverse Rosenblatt transports using tensor trains and gradient-based dimension reduction
- Gradient descent for deep matrix factorization: dynamics and implicit bias towards low rank