Multivariate techniques and machine learning models have found numerous applications in High Energy Physics (HEP) research over many years. In recent times, AI models based on deep neural networks have become increasingly popular for many of these applications. However, neural networks are regarded as black boxes: because of their high degree of complexity, it is often quite difficult to quantitatively explain the output of a neural network by establishing a tractable input-output relationship and tracing how information propagates through the deep network layers. As explainable AI (xAI) methods have gained popularity in recent years, we explore the interpretability of AI models by examining an Interaction Network (IN) model designed to identify boosted $H\to b\bar{b}$ jets amid QCD background. We explore different quantitative methods to demonstrate how the classifier network makes its decision based on the inputs, and how this information can be harnessed to reoptimize the model, making it simpler yet equally effective. We additionally illustrate the activity of hidden layers within the IN model as Neural Activation Pattern (NAP) diagrams. Our experiments suggest that NAP diagrams reveal important insights about how information is conveyed across the hidden layers of a deep model. These insights can be useful for effective model reoptimization and hyperparameter tuning.