We propose PLLay, a novel topological layer for general deep learning models based on persistence landscapes, which efficiently exploits the underlying topological features of the input data. We show differentiability with respect to layer inputs for general persistent homology with arbitrary filtration. The proposed layer can therefore be placed anywhere in a network, feeding critical information about the topological features of the input into subsequent layers to improve the learnability of the network for a given task. A task-optimal structure of PLLay is learned during training via backpropagation, without requiring any input featurization or data preprocessing. We provide a novel adaptation of the DTM-function-based filtration, and show through a stability analysis that the proposed layer is robust against noise and outliers. We demonstrate the effectiveness of our approach in classification experiments on various datasets.
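For intuition, the persistence-landscape construction that the layer builds on can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function name, grid, and toy persistence diagram are all assumptions for the example. Each diagram point (b, d) induces a "tent" function, and the k-th landscape is the k-th largest tent value at each point of the domain.

```python
import numpy as np

def persistence_landscape(diagram, grid, k_max=3):
    """Sample the first k_max landscape functions on `grid`.

    diagram: array-like of (birth, death) pairs from persistent homology.
    Returns an array of shape (k, len(grid)) with the k-th largest
    tent values per grid point (illustrative sketch, not PLLay itself).
    """
    diagram = np.asarray(diagram, dtype=float)
    t = grid[None, :]           # shape (1, T)
    b = diagram[:, 0:1]         # births, shape (N, 1)
    d = diagram[:, 1:2]         # deaths, shape (N, 1)
    # Tent function for each (b, d): max(0, min(t - b, d - t))
    tents = np.maximum(0.0, np.minimum(t - b, d - t))    # (N, T)
    # Sort descending along the diagram axis; keep the top k rows
    tents_sorted = -np.sort(-tents, axis=0)
    k = min(k_max, tents_sorted.shape[0])
    return tents_sorted[:k]

grid = np.linspace(0.0, 2.0, 201)
diag = [(0.0, 1.0), (0.25, 0.75)]
L = persistence_landscape(diag, grid, k_max=2)
# L[0] is the first landscape, L[1] the second; both are piecewise
# linear in t, which is what makes landscape-based layers amenable
# to gradient-based training.
```

Because each landscape value is a max/min composition of piecewise-linear terms, it is differentiable almost everywhere in the diagram coordinates, which is the property the abstract's differentiability claim generalizes to the layer-input level.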