Neural Operator: Learning Maps Between Function Spaces
Publication: 6375560
arXiv: 2108.08481
MaRDI QID: Q6375560
Author name not available
Publication date: 18 August 2021
Abstract: The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces or finite sets. We propose a generalization of neural networks to learn operators, termed neural operators, that map between infinite-dimensional function spaces. We formulate the neural operator as a composition of linear integral operators and nonlinear activation functions. We prove a universal approximation theorem for our proposed neural operator, showing that it can approximate any given nonlinear continuous operator. The proposed neural operators are also discretization-invariant, i.e., they share the same model parameters across different discretizations of the underlying function spaces. Furthermore, we introduce four classes of efficient parameterization, viz., graph neural operators, multipole graph neural operators, low-rank neural operators, and Fourier neural operators. An important application of neural operators is learning surrogate maps for the solution operators of partial differential equations (PDEs). We consider standard PDEs such as the Burgers, Darcy subsurface flow, and Navier-Stokes equations, and show that the proposed neural operators have superior performance compared to existing machine-learning-based methodologies, while being several orders of magnitude faster than conventional PDE solvers.
Has companion code repository: https://github.com/zongyi-li/fourier_neural_operator
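The abstract describes each neural-operator block as a linear integral operator followed by a nonlinear activation; in the Fourier neural operator class, the integral kernel is parameterized in Fourier space. The following is a minimal, illustrative PyTorch sketch of one such 1-D Fourier layer, not code taken from the linked repository: the class names (SpectralConv1d, FourierLayer), the width/modes parameters, and the pointwise skip connection are assumptions chosen for exposition.

# Illustrative sketch only (not the repository's code): one 1-D Fourier layer,
# i.e. a kernel integral operator applied via FFT plus a pointwise linear map,
# followed by a nonlinear activation. Names and hyperparameters are assumed.
import torch
import torch.nn as nn


class SpectralConv1d(nn.Module):
    """Integral operator parameterized in Fourier space: keep the lowest
    `modes` frequencies and multiply them by learned complex weights."""

    def __init__(self, width: int, modes: int):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (width * width)
        self.weights = nn.Parameter(
            scale * torch.randn(width, width, modes, dtype=torch.cfloat)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, width, n_gridpoints) -- function values sampled on a grid
        x_ft = torch.fft.rfft(x)                      # to Fourier space
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, : self.modes] = torch.einsum(
            "bix,iox->box", x_ft[:, :, : self.modes], self.weights
        )
        return torch.fft.irfft(out_ft, n=x.size(-1))  # back to physical space


class FourierLayer(nn.Module):
    """One neural-operator block: spectral convolution plus a pointwise
    linear skip path, followed by a nonlinear activation."""

    def __init__(self, width: int, modes: int):
        super().__init__()
        self.spectral = SpectralConv1d(width, modes)
        self.pointwise = nn.Conv1d(width, width, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.spectral(x) + self.pointwise(x))


# Usage: a batch of 8 input functions with 4 channels sampled on 256 points.
layer = FourierLayer(width=4, modes=16)
u = torch.randn(8, 4, 256)
print(layer(u).shape)  # torch.Size([8, 4, 256])

Truncating to the lowest `modes` Fourier coefficients is what makes such a layer resolution-independent: the same learned weights can be applied to inputs sampled on grids of different sizes, which is the discretization-invariance property stated in the abstract.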