Discovering interpretable patterns for the classification of sequential data is of key importance in a variety of fields, ranging from genomics and fraud detection to interpretable decision-making more generally. In this paper, we propose a novel, differentiable, fully interpretable method to discover both local and global patterns (i.e., patterns capturing relative or absolute temporal dependencies, respectively) for rule-based binary classification. It consists of a convolutional binary neural network with an interpretable neural filter and a training strategy based on dynamically enforced sparsity. We demonstrate the validity and usefulness of the approach on synthetic datasets and on an open-source peptides dataset. Key to this end-to-end differentiable method is that the expressive patterns used in the rules are learned alongside the rules themselves.
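To make the two ingredients named above more concrete, the following is a minimal sketch (not the authors' implementation) of a binarized 1-D convolutional pattern filter trained with a straight-through estimator, together with sparsity enforced dynamically by keeping only the top-k filter weights after each update. The class name `BinaryPatternFilter`, the `top_k` parameter, and the 4-letter alphabet are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinaryPatternFilter(nn.Module):
    """One convolutional filter whose weights are binarized in the forward pass."""

    def __init__(self, alphabet_size: int = 4, pattern_len: int = 5, top_k: int = 5):
        super().__init__()
        # Real-valued latent weights; the forward pass sees only their signs.
        self.weight = nn.Parameter(0.1 * torch.randn(1, alphabet_size, pattern_len))
        self.top_k = top_k  # number of weights allowed to remain non-zero

    def binarize(self) -> torch.Tensor:
        # Straight-through estimator: forward uses sign(w), backward passes
        # gradients through as if the identity had been applied.
        w = self.weight
        return w + (torch.sign(w) - w).detach()

    def enforce_sparsity(self) -> None:
        # Dynamically zero out all but the top-k weights by magnitude,
        # so the learned pattern stays readable as a short rule.
        with torch.no_grad():
            flat = self.weight.abs().flatten()
            if flat.numel() > self.top_k:
                threshold = flat.topk(self.top_k).values.min()
                self.weight.mul_((self.weight.abs() >= threshold).float())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, alphabet_size, seq_len), one-hot encoded sequences.
        scores = F.conv1d(x, self.binarize())  # pattern match score at every position
        return scores.amax(dim=-1)             # "local" pattern: best match anywhere


if __name__ == "__main__":
    # Toy usage: 8 one-hot sequences of length 20 over a 4-letter alphabet.
    x = F.one_hot(torch.randint(0, 4, (8, 20)), num_classes=4).float().transpose(1, 2)
    y = torch.randint(0, 2, (8,)).float()

    model = BinaryPatternFilter()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(100):
        logit = model(x).squeeze(-1)
        loss = F.binary_cross_entropy_with_logits(logit, y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        model.enforce_sparsity()  # dynamically enforced sparsity after each update
```

In this sketch, max-pooling over positions yields a position-independent (local) pattern; keeping the per-position scores instead would correspond to a position-anchored (global) pattern.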