Automated code generation separates the development of a model, expressed via a domain specific language, from lower-level implementation details. Algorithmic differentiation can be applied symbolically at the level of the domain specific language, and the code generator reused to generate the code required for an adjoint calculation. However, adjoint calculations are complicated by the well-known problem of storing or recomputing the forward model data required by the adjoint, and different checkpointing strategies have been developed to tackle this problem. This article describes the application of checkpointing strategies to high-level algorithmic differentiation, applied to codes developed using automated code generation. Since the high-level approach provides a simplified view of the model itself, the data required to restart the forward model and the data required to advance the adjoint can be identified separately, and the difference between them leveraged to implement checkpointing strategies with improved performance.
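The store-or-recompute trade-off can be illustrated with a minimal sketch (not the article's implementation): a scalar time-stepping model whose forward states are checkpointed periodically, with intermediate states recomputed segment by segment during the reverse (adjoint) sweep. The step function `f`, its derivative `df`, and the checkpoint `period` are all illustrative assumptions.

```python
# Illustrative periodic checkpointing sketch for a scalar model
# x_{n+1} = f(x_n); the adjoint of a nonlinear step needs x_n.
import math

def f(x):
    return math.sin(x)      # forward time step (assumed, nonlinear)

def df(x):
    return math.cos(x)      # derivative of the step, used by the adjoint

def forward(x0, n_steps, period):
    """Advance the forward model, storing a restart checkpoint every
    `period` steps instead of the whole trajectory."""
    checkpoints = {0: x0}
    x = x0
    for n in range(n_steps):
        x = f(x)
        if (n + 1) % period == 0:
            checkpoints[n + 1] = x
    return x, checkpoints

def adjoint(checkpoints, n_steps, period, lam_final):
    """Sweep the adjoint backwards; within each segment, recompute the
    forward states from the nearest stored checkpoint."""
    lam = lam_final
    for seg_start in range(((n_steps - 1) // period) * period, -1, -period):
        seg_end = min(seg_start + period, n_steps)
        # recompute forward states x_{seg_start} .. x_{seg_end-1}
        xs = [checkpoints[seg_start]]
        for _ in range(seg_start, seg_end - 1):
            xs.append(f(xs[-1]))
        # propagate the adjoint backwards through the segment
        for n in range(seg_end - 1, seg_start - 1, -1):
            lam = df(xs[n - seg_start]) * lam
    return lam

# Gradient dJ/dx0 for J = x_N, with checkpoints only at steps 0, 4, 8
x0, N, period = 0.5, 10, 4
_, cps = forward(x0, N, period)
grad = adjoint(cps, N, period, 1.0)
```

Here memory holds only `N/period + 1` states rather than all `N + 1`, at the cost of recomputing each segment once; more sophisticated schedules refine this trade-off, and the article's key point is that the restart data and the adjoint-dependency data need not be the same set.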