Deep learning methods have shown great success in several domains as they efficiently process large amounts of data and are capable of solving complex classification, forecasting, segmentation, and other tasks. However, they come with the inherent drawback of inexplicability, limiting their applicability and trustworthiness. Although there is existing work addressing this shortcoming, most approaches are limited to the image modality, where the underlying concepts are intuitive and prominent. Conversely, the concepts in the time-series domain are more complex and less comprehensible, yet these concepts and an explanation for the network decision are pivotal in critical domains such as medicine, finance, or industry. Addressing the need for an explainable approach, we propose a novel interpretable network scheme designed to inherently use an explainable reasoning process inspired by human cognition, without the need for additional post-hoc explainability methods. To this end, class-specific patches are used, as they cover local concepts relevant to the classification and reveal similarities with samples of the same class. In addition, we introduce a novel loss concerning both interpretability and accuracy that constrains P2ExNet to provide viable explanations of the data, including the relevant patches, their positions, class similarities, and the comparison method, without compromising accuracy. Analysis of the results on eight publicly available time-series datasets reveals that P2ExNet achieves performance comparable to its counterparts while inherently providing understandable and traceable decisions.
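To make the patch-based reasoning described above concrete, the following is a minimal sketch of how learnable class-specific prototype patches and a combined accuracy-plus-interpretability loss could be realized in PyTorch. It is an illustration under assumed names and forms, not the authors' P2ExNet implementation: `PatchPrototypeLayer`, `prototypes_per_class`, `lambda_interp`, the exponential similarity, and the cluster-style interpretability term are all assumptions chosen to mirror common prototype-network practice.

```python
# Hypothetical sketch of patch-prototype reasoning; NOT the authors' P2ExNet code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchPrototypeLayer(nn.Module):
    """Compares local patches of an encoded time series against learnable,
    class-specific prototype patches and reports where each one matches best."""
    def __init__(self, n_classes, prototypes_per_class, patch_len, channels):
        super().__init__()
        n_prototypes = n_classes * prototypes_per_class
        # Learnable prototype patches, each assigned to one class.
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, channels, patch_len))
        self.register_buffer("class_of", torch.arange(n_prototypes) // prototypes_per_class)

    def forward(self, z):
        # z: (batch, channels, length) latent representation of the series.
        patch_len = self.prototypes.shape[-1]
        # Slide a window over z so the explanation can point to *where*
        # a class-specific concept occurs in the input.
        patches = z.unfold(dimension=2, size=patch_len, step=1)  # (B, C, pos, patch_len)
        diffs = patches.unsqueeze(1) - self.prototypes[None, :, :, None, :]
        dists = (diffs ** 2).sum(dim=(2, 4))          # (B, n_prototypes, pos)
        min_dist, best_pos = dists.min(dim=2)         # best match per prototype
        similarity = torch.exp(-min_dist)             # high when a patch matches
        return similarity, min_dist, best_pos

def combined_loss(logits, labels, min_dist, class_of, lambda_interp=0.1):
    """Accuracy term plus an interpretability term that pulls every sample
    toward at least one prototype of its own class (a common 'cluster' cost)."""
    ce = F.cross_entropy(logits, labels)
    same_class = class_of[None, :] == labels[:, None]
    cluster = min_dist.masked_fill(~same_class, float("inf")).min(dim=1).values.mean()
    return ce + lambda_interp * cluster

# Toy usage: 2 classes with 3 prototypes each on a random batch.
layer = PatchPrototypeLayer(n_classes=2, prototypes_per_class=3, patch_len=8, channels=4)
readout = nn.Linear(6, 2)                             # logits from prototype similarities
z = torch.randn(5, 4, 32)                             # (batch, channels, length)
similarity, min_dist, best_pos = layer(z)
loss = combined_loss(readout(similarity), torch.randint(0, 2, (5,)), min_dist, layer.class_of)
```

Returning `best_pos` alongside the similarities is what would allow explanations of the kind the abstract lists: the relevant patch (the matched prototype), its position in the input, and a class similarity derived from which class the matched prototypes belong to.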