Towards Stability of Autoregressive Neural Operators

From MaRDI portal
Publication:6440681

arXiv: 2306.10619 · MaRDI QID: Q6440681

Author name not available

Publication date: 18 June 2023

Abstract: Neural operators have proven to be a promising approach for modeling spatiotemporal systems in the physical sciences. However, training these models for large systems can be quite challenging as they incur significant computational and memory expense -- practitioners are often forced to rely on autoregressive time-stepping of the neural network to predict future temporal states. While this is effective in managing costs, it can lead to uncontrolled error growth over time and eventual instability. We analyze the sources of this autoregressive error growth using prototypical neural operator models for physical systems and explore ways to mitigate it. We introduce architectural and application-specific improvements that allow for careful control of instability-inducing operations within these models without inflating the compute/memory expense. We present results on several scientific systems, including Navier-Stokes fluid flow, rotating shallow water, and a high-resolution global weather forecasting system. We demonstrate that applying our design principles to prototypical neural networks leads to significantly lower errors in long-range forecasts, with 800% longer rollouts before qualitative signs of divergence compared to the original models for these systems. We open-source our code (https://anonymous.4open.science/r/stabilizing_neural_operators-5774/) for reproducibility.
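The error growth the abstract describes can be illustrated with a toy linear sketch (not the paper's method): when a one-step operator is applied autoregressively, any amplification compounds across the rollout, while constraining the operator's spectral norm to at most 1 keeps repeated application from magnifying perturbations. The operator `A` below is a random stand-in for a learned one-step map, not anything from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
# Random stand-in for a learned one-step operator; the 1.5/sqrt(n) scaling
# gives it a spectral radius around 1.5, i.e. an amplifying map.
A = 1.5 * rng.standard_normal((n, n)) / np.sqrt(n)

def rollout(op, x0, steps):
    """Autoregressive rollout: feed each prediction back in as the next input."""
    x = x0
    norms = []
    for _ in range(steps):
        x = op @ x
        norms.append(np.linalg.norm(x))
    return norms

# One simple stabilization: rescale so the largest singular value is <= 1,
# so repeated application cannot amplify the state (or errors in it).
A_stable = A / max(1.0, np.linalg.norm(A, 2))

x0 = rng.standard_normal(n)
unstable = rollout(A, x0, 200)
stable = rollout(A_stable, x0, 200)
```

Here `unstable` blows up over the 200-step rollout while `stable` stays bounded by the initial norm; the paper's contribution is achieving this kind of control inside real neural operator architectures without extra compute or memory cost.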




Has companion code repository: https://github.com/mikemccabe210/stabilizing_neural_operators








