On the implementation of checkpointing with high-level algorithmic differentiation
Publication: 6436838
arXiv: 2305.09568
MaRDI QID: Q6436838
Author name not available
Publication date: 16 May 2023
Abstract: Automated code generation allows a separation between the development of a model, expressed via a domain-specific language, and lower-level implementation details. Algorithmic differentiation can be applied symbolically at the level of the domain-specific language, and the code generator reused to implement the code required for an adjoint calculation. However, adjoint calculations are complicated by the well-known problem of storing or recomputing the forward model data required by the adjoint, and different checkpointing strategies have been developed to tackle this problem. This article describes the application of checkpointing strategies to high-level algorithmic differentiation, applied to codes developed using automated code generation. Since the high-level approach provides a simplified view of the model itself, the data required to restart the forward calculation and the data required to advance the adjoint can be identified, and the difference between them leveraged to implement checkpointing strategies with improved performance.
Has companion code repository: https://github.com/jrmaddison/tlm_adjoint
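The distinction drawn in the abstract between data needed to restart the forward calculation and data needed to advance the adjoint can be illustrated with a minimal periodic-checkpointing sketch. The code below is a self-contained toy example, not the paper's method or the tlm_adjoint API: the model x_{n+1} = sin(a * x_n), the functions forward_step, adjoint_step and adjoint_with_checkpoints, and the period parameter are all hypothetical names introduced here for illustration. Checkpoints hold only restart data (the state at segment boundaries), while the forward states the adjoint consumes are recomputed one segment at a time during the reverse sweep and discarded once used.

```python
"""Illustrative periodic-checkpointing sketch for a toy forward/adjoint pair.
This is a hypothetical example, not tlm_adjoint's implementation."""

import math


def forward_step(x, a):
    """One step of the toy forward model: x -> sin(a * x)."""
    return math.sin(a * x)


def adjoint_step(x, adj_x, a):
    """Adjoint of one step: given x_n and the adjoint of x_{n+1}, return the
    adjoint of x_n and this step's contribution to dJ/da."""
    c = math.cos(a * x)
    return a * c * adj_x, x * c * adj_x


def adjoint_with_checkpoints(x0, a, n_steps, period):
    """Compute dJ/dx0 and dJ/da for J = x_N, storing restart checkpoints
    every `period` steps and recomputing forward segments in reverse."""
    assert n_steps % period == 0, "sketch assumes whole segments"

    # Forward sweep: store restart data (segment-boundary states) only.
    checkpoints = {0: x0}
    x = x0
    for n in range(n_steps):
        x = forward_step(x, a)
        if (n + 1) % period == 0 and n + 1 < n_steps:
            checkpoints[n + 1] = x

    # Reverse sweep: adjoint of J = x_N.
    adj_x, dJ_da = 1.0, 0.0
    for seg_end in range(n_steps, 0, -period):
        seg_start = seg_end - period
        # Recompute the forward states the adjoint needs on this segment.
        xs = [checkpoints[seg_start]]
        for _ in range(seg_start, seg_end - 1):
            xs.append(forward_step(xs[-1], a))
        # Advance the adjoint backwards through the segment, then discard xs.
        for x_n in reversed(xs):
            adj_x, da = adjoint_step(x_n, adj_x, a)
            dJ_da += da
    return adj_x, dJ_da


if __name__ == "__main__":
    dJ_dx0, dJ_da = adjoint_with_checkpoints(x0=0.3, a=1.7, n_steps=16, period=4)
    print(dJ_dx0, dJ_da)
```

In this sketch the peak storage is one state per segment boundary rather than one per step, at the cost of roughly one extra forward recomputation of each segment; the paper's contribution concerns how such trade-offs can be exploited more effectively when the high-level view of the model separates restart data from adjoint-advance data.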