Out-of-distributional risk bounds for neural operators with applications to the Helmholtz equation

Publication: Q6424544

arXiv: 2301.11509
MaRDI QID: Q6424544

Author name not available

Publication date: 26 January 2023

Abstract: Despite their remarkable success in approximating a wide range of operators defined by PDEs, existing neural operators (NOs) do not necessarily perform well for all physics problems. We focus here on high-frequency waves to highlight possible shortcomings. To resolve these, we propose a subfamily of NOs enabling an enhanced empirical approximation of the nonlinear operator mapping wave speed to solution, or to boundary values, for the Helmholtz equation on a bounded domain; the latter operator is commonly referred to as the forward operator in the study of inverse problems. Our methodology draws inspiration from transformers and from techniques such as stochastic depth. Our experiments reveal certain surprises in generalization and in the relevance of introducing stochastic depth. Our NOs show superior performance compared with standard NOs, not only for testing within the training distribution but also for out-of-distribution scenarios. To delve into this observation, we offer an in-depth analysis of the Rademacher complexity associated with our modified models and prove an upper bound, tied to their stochastic depth, that existing NOs do not satisfy. Furthermore, we obtain a novel out-of-distribution risk bound tailored to Gaussian measures on Banach spaces, again relating stochastic depth to the bound. We conclude by proposing a hypernetwork version of the subfamily of NOs as a surrogate model for the aforementioned forward operator.
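The stochastic-depth mechanism the abstract refers to can be illustrated with a minimal pure-Python sketch: during training, each residual block is skipped (reduced to the identity) with some probability, and at inference every block runs with its residual branch scaled by the survival probability so the output matches the training-time expectation. The toy residual update and all names below are illustrative assumptions, not the paper's actual architecture.

```python
import math
import random

def residual_block(x, w):
    # Toy residual update standing in for one neural-operator layer:
    # x -> x + tanh(w * x).
    return x + math.tanh(w * x)

def stochastic_depth_forward(x, weights, survival_prob, train=True, rng=random):
    """Forward pass through a stack of residual blocks with stochastic depth.

    Training: each block's residual branch is kept with probability
    survival_prob and skipped (identity shortcut) otherwise.
    Inference: every branch runs, scaled by survival_prob, matching the
    expected training-time output.
    """
    for w in weights:
        if train:
            if rng.random() < survival_prob:
                x = residual_block(x, w)
            # else: block skipped entirely, identity shortcut
        else:
            x = x + survival_prob * math.tanh(w * x)
    return x
```

With `survival_prob = 1.0` the training and inference passes coincide, and with `survival_prob = 0.0` the network degenerates to the identity map; the depth bounds discussed in the abstract exploit the fact that lowering the survival probability shrinks the effective depth of the composition.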

Has companion code repository: https://github.com/jalb-epsilon/fine-tuning-nos
