The Majorana Demonstrator is a leading experiment searching for neutrinoless double-beta decay with high-purity germanium (HPGe) detectors. Machine learning provides a new way to maximize the amount of information these detectors provide, but its data-driven nature makes it less interpretable than traditional analysis. An interpretability study reveals the machine's decision-making logic, allowing us to learn from the machine and feed its insights back into the traditional analysis. In this work, we present the first machine learning analysis of data from the Majorana Demonstrator; it is also the first interpretable machine learning analysis of any germanium detector experiment. Two gradient boosted decision tree models are trained to learn from the data, and a game-theory-based model interpretability study is conducted to understand the origin of the classification power. By learning from data, this analysis recognizes correlations among reconstruction parameters to further enhance the background rejection performance. By learning from the machine, this analysis reveals the importance of new background categories, reciprocally benefiting the standard Majorana analysis. This model is highly compatible with next-generation germanium detector experiments such as LEGEND, since it can be trained simultaneously on a large number of detectors.
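To make the approach concrete, the sketch below illustrates the general pattern of a gradient boosted decision tree classifier combined with a game-theory-based (Shapley-value) interpretability study. It is a minimal, hypothetical example, not the Demonstrator's actual pipeline: the feature names, data, and library choices (scikit-learn and the shap package) are assumptions made purely for illustration.

```python
# Minimal sketch (hypothetical, not the collaboration's actual analysis):
# train a gradient boosted decision tree on toy "reconstruction parameters"
# and apply SHAP, a game-theory-based attribution method, to see which
# parameters carry the signal/background classification power.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Illustrative feature names standing in for detector reconstruction parameters.
features = ["drift_time", "current_amplitude", "rise_time", "energy", "tail_slope"]
n_events = 5000
X = rng.normal(size=(n_events, len(features)))
# Toy labels: 1 = signal-like (single-site) event, 0 = background-like (multi-site).
y = (X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=n_events) > 0).astype(int)

# Gradient boosted decision tree classifier.
model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05, max_depth=3)
model.fit(X, y)

# Shapley-value attributions: each event's prediction is decomposed into
# per-parameter contributions, revealing the origin of the decision.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_events, n_features)

# Rank parameters by mean absolute SHAP value (a global importance measure).
importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(features, importance), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

In this pattern, the per-event SHAP values expose correlations among reconstruction parameters that the model exploits, which is the kind of information that can be fed back into a traditional cut-based analysis.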