Robustness studies of black-box models are recognized as a necessary task, both for numerical models based on structural equations and for predictive models learned from data. These studies must assess the model's robustness to possible misspecification of its inputs (e.g., covariate shift). The study of black-box models, through the prism of uncertainty quantification (UQ), is often based on sensitivity analysis involving a probabilistic structure imposed on the inputs, while ML models are solely constructed from observed data. Our work aims at unifying the UQ and ML interpretability approaches by providing relevant and easy-to-use tools for both paradigms. To offer a generic and understandable framework for robustness studies, we define perturbations of input information relying on quantile constraints and projections with respect to the Wasserstein distance between probability measures, while preserving their dependence structure. We show that this perturbation problem can be solved analytically. Ensuring regularity constraints by means of isotonic polynomial approximations leads to smoother perturbations, which can be more suitable in practice. Numerical experiments on real case studies, from both the UQ and ML fields, highlight the computational feasibility of such studies and provide local and global insights into the robustness of black-box models to input perturbations.
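To give a concrete feel for the kind of perturbation the abstract describes, the sketch below illustrates the one-dimensional case, where the Wasserstein-2 distance reduces to the L2 distance between quantile functions. Under a single quantile constraint (the perturbed alpha-quantile must equal a target value), the projection of an empirical distribution has a simple closed form: clip the sorted sample at the target on the appropriate side of the constrained quantile. This is an illustrative sketch, not the paper's implementation; the function name and interface are hypothetical, and the multivariate, dependence-preserving case is not covered here.

```python
import numpy as np

def quantile_constrained_projection(sample, alpha, target):
    """Wasserstein-2 projection (1-D sketch) of the empirical distribution
    of `sample` onto the set of distributions whose alpha-quantile equals
    `target`.

    In 1-D, W2 is the L2 distance between quantile functions, so the
    projection acts pointwise on the sorted sample while keeping it
    monotone: values left of the constrained quantile are capped at
    `target`, values right of it are floored at `target`.
    """
    q = np.sort(np.asarray(sample, dtype=float))  # empirical quantile function
    n = len(q)
    k = int(np.ceil(alpha * n))                   # index of the alpha-quantile
    out = q.copy()
    out[:k] = np.minimum(out[:k], target)         # left of the constraint: cap
    out[k:] = np.maximum(out[k:], target)         # right of the constraint: floor
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=1000)                         # original input sample, N(0, 1)
y = quantile_constrained_projection(x, alpha=0.5, target=1.0)

# The perturbed sample is still sorted (a valid quantile function) and
# its median value now sits at the target.
print(bool(np.all(np.diff(y) >= 0)), float(y[500]))
```

The clipping form makes the analytical solvability claimed in the abstract plausible in this simple setting: each side of the constraint is a pointwise L2 projection, and the result remains monotone, so no further correction is needed.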