Convolutional Neural Networks (CNNs) are widely used in fault diagnosis of mechanical systems due to their powerful feature extraction and classification capabilities. However, the CNN is a typical black-box model, and the mechanism of its decision-making is not clear, which limits its application in fault diagnosis scenarios requiring high reliability. To tackle this issue, we propose a novel interpretable neural network, termed the Time-Frequency Network (TFN), in which a physically meaningful time-frequency transform (TFT) method is embedded into the traditional convolutional layer as an adaptive preprocessing layer. This preprocessing layer, named the time-frequency convolutional (TFconv) layer, is constrained by a well-designed kernel function to extract fault-related time-frequency information. It not only improves the diagnostic performance but also reveals the logical foundation of the CNN's predictions in the frequency domain. Different TFT methods correspond to different kernel functions of the TFconv layer. In this study, four typical TFT methods are considered to formulate TFNs, and their effectiveness and interpretability are demonstrated through three mechanical fault diagnosis experiments. Experimental results also show that the proposed TFconv layer can be easily generalized to other CNNs of different depths. The code of TFN is available at https://github.com/ChenQian0618/TFN.
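To make the idea of a kernel-function-constrained preprocessing layer concrete, the following is a minimal PyTorch sketch, not the authors' implementation from the linked repository. It assumes, for illustration only, a Morlet-like cosine kernel with one learnable center frequency per output channel; the actual TFN supports several TFT-derived kernel functions.

```python
import math
import torch
import torch.nn as nn


class TFconvSketch(nn.Module):
    """Illustrative sketch of a TFconv-style preprocessing layer.

    Each output channel's 1-D kernel is generated from a fixed analytic
    expression (here a Gaussian-windowed cosine, i.e. a real Morlet-like
    wavelet) whose center frequency is learnable, so the layer acts as an
    adaptive time-frequency transform rather than a free-form convolution.
    """

    def __init__(self, out_channels: int = 16, kernel_size: int = 65):
        super().__init__()
        self.kernel_size = kernel_size
        # Learnable normalized center frequencies, one per output channel (assumed parameterization).
        self.freq = nn.Parameter(torch.linspace(0.05, 0.45, out_channels))
        # Fixed time axis centered at zero, shared by all kernels.
        t = torch.arange(kernel_size, dtype=torch.float32) - (kernel_size - 1) / 2
        self.register_buffer("t", t)

    def _make_kernels(self) -> torch.Tensor:
        t = self.t.unsqueeze(0)                      # (1, K)
        f = self.freq.unsqueeze(1)                   # (C, 1)
        envelope = torch.exp(-(t ** 2) / (2 * (self.kernel_size / 6) ** 2))
        kernels = envelope * torch.cos(2 * math.pi * f * t)
        return kernels.unsqueeze(1)                  # (C, 1, K) conv1d weight layout

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, signal_length) raw vibration signal.
        weight = self._make_kernels()
        return nn.functional.conv1d(x, weight, padding=self.kernel_size // 2)


if __name__ == "__main__":
    signal = torch.randn(8, 1, 1024)                   # dummy batch of raw signals
    feature_map = TFconvSketch(out_channels=16)(signal)
    print(feature_map.shape)                           # (8, 16, 1024) time-frequency-like map
```

Because the kernels are fully determined by a few physically interpretable parameters (the center frequencies), the learned layer can be read off directly as a bank of band-pass filters, which is the source of the frequency-domain interpretability claimed for the TFconv layer.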