Automatic differentiation (AD) aims to compute derivatives of user-defined functions, but in Turing-complete languages, this simple specification does not fully capture AD's behavior: AD sometimes disagrees with the true derivative of a differentiable program, and when AD is applied to non-differentiable or effectful programs, it is unclear what guarantees (if any) hold of the resulting code. We study an expressive differentiable programming language, with piecewise-analytic primitives, higher-order functions, and general recursion. Our main result is that even in this general setting, a version of Lee et al. [2020]'s correctness theorem (originally proven for a first-order language without partiality or recursion) holds: all programs denote so-called $\omega$PAP functions, and AD computes correct intensional derivatives of them. Mazza and Pagani [2021]'s recent theorem, that AD disagrees with the true derivative of a differentiable recursive program at a measure-zero set of inputs, can be derived as a straightforward corollary of this fact. We also apply the framework to study probabilistic programs, and recover a recent result from Mak et al. [2021] via a novel denotational argument.
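To make the opening claim concrete, the following sketch is an illustration of the phenomenon, not code from the paper; it implements a minimal forward-mode AD with dual numbers in Python, and the names Dual, branchy_identity, and my_sqrt are hypothetical. branchy_identity is mathematically the identity function, so its true derivative is 1 everywhere, yet AD differentiates only the branch that is actually taken and reports 0 at the single point x = 0, a measure-zero set of inputs of the kind covered by Mazza and Pagani [2021]'s result. The recursive my_sqrt (Newton's iteration run to convergence) illustrates that AD applied to a program with general recursion and data-dependent branching can still return the true derivative at inputs where the program is differentiable.

from dataclasses import dataclass

@dataclass
class Dual:
    """Forward-mode AD value: primal plus tangent (derivative w.r.t. the input)."""
    val: float
    tan: float

    def _lift(self, other):
        # Promote plain numbers to constants (zero tangent).
        return other if isinstance(other, Dual) else Dual(float(other), 0.0)

    def __add__(self, other):
        other = self._lift(other)
        return Dual(self.val + other.val, self.tan + other.tan)

    def __truediv__(self, other):
        other = self._lift(other)
        return Dual(self.val / other.val,
                    (self.tan * other.val - self.val * other.tan) / other.val ** 2)


def branchy_identity(x):
    # Mathematically the identity function: returns x for every input,
    # so the true derivative is 1 everywhere, including at x = 0.
    if x.val == 0.0:
        return Dual(0.0, 0.0)   # AD differentiates the branch taken: the constant 0
    return x


def my_sqrt(x, guess=None):
    # General recursion with a data-dependent stopping test (Newton's iteration).
    guess = Dual(1.0, 0.0) if guess is None else guess
    new = (guess + x / guess) / 2
    if abs(new.val - guess.val) < 1e-12:
        return new
    return my_sqrt(x, new)


print(branchy_identity(Dual(0.0, 1.0)).tan)  # 0.0, but the true derivative is 1.0
print(branchy_identity(Dual(3.0, 1.0)).tan)  # 1.0, agreeing away from x = 0
print(my_sqrt(Dual(2.0, 1.0)).tan)           # ~0.35355339, matches 1/(2*sqrt(2))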