Second derivatives of mathematical models for real-world phenomena are fundamental ingredients of a wide range of numerical simulation methods, including parameter sensitivity analysis, uncertainty quantification, nonlinear optimization, and model calibration. The evaluation of such Hessians often dominates the overall computational effort. Various combinatorial optimization problems can be formulated based on the highly desirable exploitation of the associativity of the chain rule of differential calculus. The fundamental Hessian Accumulation problem, which aims to minimize the number of floating-point operations required to compute a Hessian, turns out to be NP-complete. The restriction to suitable subspaces of the exponential search space proposed in this paper ensures computational tractability while yielding improvements by factors of ten and higher over standard approaches based on second-order tangent and adjoint algorithmic differentiation. Motivated by second-order parameter sensitivity analysis of surrogate numerical models obtained through training and pruning of deep neural networks, this paper focuses on the bracketing of dense Hessian chain products with the aim of minimizing the total number of floating-point operations to be performed. The results of a given dynamic programming algorithm for optimized bracketing of the underlying dense Jacobian chain product are reused to reduce the computational cost of the corresponding Hessian, requiring minimal additional algorithmic effort.
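The "associativity of the chain rule" mentioned above refers to the first- and second-order differentiation of composite functions. As a brief reminder, the second-order chain rule for a composition y = g(f(x)) reads as follows; the notation here is generic and is an assumption, not taken from the paper:

```latex
% Second-order chain rule for y = g(f(x)) with f : R^n -> R^m, g : R^m -> R.
% First order:  \nabla (g \circ f)(x) = f'(x)^T \, \nabla g(f(x))
% Second order, entrywise:
\left[\nabla^2 (g \circ f)(x)\right]_{jk}
  = \sum_{a,b=1}^{m}
      \frac{\partial^2 g}{\partial u_a \, \partial u_b}
      \frac{\partial f_a}{\partial x_j}
      \frac{\partial f_b}{\partial x_k}
  \;+\;
    \sum_{a=1}^{m}
      \frac{\partial g}{\partial u_a}
      \frac{\partial^2 f_a}{\partial x_j \, \partial x_k}
```

The first term propagates the Hessian of g through the Jacobian of f; the second contracts the gradient of g with the second derivative tensor of f. Extending this identity to a chain of q factors produces the Hessian chain products whose bracketing the paper optimizes.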
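The underlying Jacobian chain bracketing problem is structurally analogous to classic matrix chain multiplication, for which dynamic programming finds the operation-minimal bracketing in cubic time. The sketch below illustrates that dynamic programming scheme only; it is not the paper's Hessian algorithm, and the names `chain_bracketing` and `dims` are hypothetical.

```python
def chain_bracketing(dims):
    """Operation-minimal bracketing of a dense matrix (Jacobian) chain.

    dims[i] is the number of rows of factor i and dims[i+1] its number
    of columns, so the chain has len(dims) - 1 factors.  Returns the
    minimal fused multiply-add (fma) count and the split-point table.
    """
    q = len(dims) - 1                        # number of factors
    cost = [[0] * q for _ in range(q)]       # cost[i][j]: best fma count for factors i..j
    split = [[0] * q for _ in range(q)]      # split[i][j]: optimal split point k
    for length in range(2, q + 1):           # sub-chain length
        for i in range(q - length + 1):
            j = i + length - 1
            cost[i][j] = float("inf")
            for k in range(i, j):            # try splitting between k and k+1
                c = (cost[i][k] + cost[k + 1][j]
                     + dims[i] * dims[k + 1] * dims[j + 1])
                if c < cost[i][j]:
                    cost[i][j] = c
                    split[i][j] = k
    return cost[0][q - 1], split
```

For example, for a chain of three factors with dimensions 10x100, 100x5, and 5x50, bracketing the first two factors first costs 7500 fma, versus 75000 for the alternative bracketing; the table `split` records these choices so the optimal bracketing can be replayed, which is the kind of result the paper reuses when accumulating the corresponding Hessian.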