Group equivariant convolutional neural networks (G-CNNs) have been successfully applied in geometric deep learning. Typically, G-CNNs have the advantage over CNNs that they do not waste network capacity on training symmetries that should have been hard-coded into the network. The recently introduced framework of PDE-based G-CNNs (PDE-G-CNNs) generalizes G-CNNs. PDE-G-CNNs have the core advantages that they simultaneously 1) reduce network complexity, 2) increase classification performance, and 3) provide geometric interpretability of the network. Their implementations consist solely of linear and morphological convolutions with kernels. In this paper we show that the previously suggested approximative morphological kernels do not always accurately approximate the exact kernels. More specifically, depending on the spatial anisotropy of the Riemannian metric, we argue that one must resort to sub-Riemannian approximations. We solve this problem by providing a new approximative kernel that works regardless of the anisotropy. We provide new theorems with better error estimates of the approximative kernels, and prove that they all carry the same reflectional symmetries as the exact ones. We test the effectiveness of multiple approximative kernels within the PDE-G-CNN framework on two datasets, and observe an improvement with the new approximative kernel. We report that PDE-G-CNNs again allow for a considerable reduction of network complexity while achieving comparable or better performance than G-CNNs and CNNs on both datasets. Moreover, PDE-G-CNNs have the advantage of better geometric interpretability than G-CNNs, as the morphological kernels are related to association fields from neurogeometry.
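To make the mention of morphological convolutions concrete, the following is a minimal 1D NumPy sketch, not the authors' implementation: the helper names `hopf_lax_kernel`, `erosion`, and `dilation` are hypothetical, and the quadratic structuring kernel k(x) = |x|^2/(4t) is the textbook Hopf-Lax solution kernel for Hamilton-Jacobi PDEs on the line. In the actual PDE-G-CNN layers the Euclidean distance |x| is replaced by a (sub-)Riemannian distance on the group, whose approximations are the subject of this paper.

```python
import numpy as np

def hopf_lax_kernel(radius, t=1.0, dx=1.0):
    """Quadratic structuring kernel k(x) = x^2 / (4t) on a 1D grid.

    This is the exact morphological kernel solving the dilation/erosion
    PDEs du/dt = +/- |du/dx|^2-type equations via the Hopf-Lax formula.
    """
    x = np.arange(-radius, radius + 1) * dx
    return x**2 / (4.0 * t)

def erosion(f, k):
    """(f [-] k)(x) = inf_y [ f(y) + k(x - y) ]  (discrete infimal convolution)."""
    r = (len(k) - 1) // 2
    fp = np.pad(f, r, mode="edge")   # edge padding as a simple boundary choice
    kr = k[::-1]                     # reflect kernel so offsets x - y line up
    return np.array([np.min(fp[i:i + len(k)] + kr) for i in range(len(f))])

def dilation(f, k):
    """(f [+] k)(x) = sup_y [ f(y) - k(x - y) ]  (discrete supremal convolution)."""
    r = (len(k) - 1) // 2
    fp = np.pad(f, r, mode="edge")
    kr = k[::-1]
    return np.array([np.max(fp[i:i + len(k)] - kr) for i in range(len(f))])

# Example: a dilation widens maxima, an erosion widens minima.
f = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
k = hopf_lax_kernel(radius=2, t=0.5)
print(dilation(f, k))
print(erosion(f, k))
```

Within a PDE-G-CNN layer these two operations alternate with ordinary linear convolutions; the accuracy of the network therefore hinges on how well the kernel k approximates the true (sub-)Riemannian distance-based kernel, which is precisely the approximation quality analyzed here.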