Neural networks are ubiquitous in high energy physics research. However, these highly nonlinear parameterized functions are often treated as \textit{black boxes}, whose inner workings for conveying information and building the desired input-output relationship are largely intractable. Explainable AI (xAI) methods can help determine a neural model's relationship with data and make the model \textit{interpretable} by establishing a quantitative and tractable relationship between the input and the model's output. In this letter of interest, we explore the potential of xAI methods for problems in high energy physics.