Speeding up and reducing memory usage for scientific machine learning via mixed precision
Publication:6566075
DOI: 10.1016/j.cma.2024.117093
MaRDI QID: Q6566075
Joel Hayford, Jacob Goldman-Wetzler, Lu Lu, Eric H. Wang
Publication date: 3 July 2024
Published in: Computer Methods in Applied Mechanics and Engineering
Keywords: computational efficiency; partial differential equations; mixed precision; physics-informed neural networks; scientific machine learning; deep operator networks
MSC classifications: Artificial neural networks and deep learning (68T07); Roundoff error (65G50); Parallel numerical computation (65Y05)
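The publication's central technique, mixed precision, trades numerical precision for speed and memory: forward computations run in a 16-bit format while a 32-bit master copy of the parameters absorbs the small gradient updates that float16 would round away. The following NumPy toy (an illustrative sketch only, not the paper's DeepXDE-based implementation; all variable names are hypothetical) shows this master-weight pattern on a least-squares problem, plus the factor-of-two memory saving:

```python
import numpy as np

# Illustrative sketch of mixed-precision training (assumption: toy
# linear least-squares, not the paper's PINN/DeepONet setup).
# Idea: compute the forward pass in float16, but keep a float32
# "master" copy of the weights so tiny updates are not lost to rounding.

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 4)).astype(np.float16)   # half-precision inputs
w_true = np.array([1.0, -2.0, 0.5, 3.0], dtype=np.float32)
y = X.astype(np.float32) @ w_true                      # exact float32 targets

w_master = np.zeros(4, dtype=np.float32)               # float32 master weights
lr = 1e-2
for _ in range(500):
    w16 = w_master.astype(np.float16)                  # low-precision working copy
    pred = (X @ w16).astype(np.float32)                # forward pass in float16
    grad = X.astype(np.float32).T @ (pred - y) / len(y)  # accumulate in float32
    w_master -= lr * grad                              # update the master copy

print(w_master)                    # converges near w_true, up to float16 noise
print(X.nbytes, X.astype(np.float32).nbytes)  # float16 storage is half the size
```

The float32 master copy is the standard safeguard in mixed-precision training: with a learning rate of 1e-2 and weights of order 1, a pure-float16 update of magnitude ~1e-4 can fall below the representable spacing and silently vanish.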
Cites Work
- Bayesian deep convolutional encoder-decoder networks for surrogate modeling and uncertainty quantification
- Neural operator prediction of linear instability waves in high-speed boundary layers
- A comprehensive and fair comparison of two neural operators (with practical extensions) based on FAIR data
- Gradient-enhanced physics-informed neural networks for forward and inverse PDE problems
- Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data
- Quantifying total uncertainty in physics-informed neural networks for solving forward and inverse stochastic problems
- Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations
- A comprehensive study of non-adaptive and residual-based adaptive sampling for physics-informed neural networks
- DeepXDE: A Deep Learning Library for Solving Differential Equations
- Physics-Informed Neural Networks with Hard Constraints for Inverse Design
- fPINNs: Fractional Physics-Informed Neural Networks
- Fourier-DeepONet: Fourier-enhanced deep operator networks for full waveform inversion with improved accuracy, generalizability, and robustness
Related Items (1)