Digital histopathology slides are scanned and viewed under different magnifications and stored as images at different resolutions. Convolutional Neural Networks (CNNs) trained on such images at a given scale fail to generalise to images at other scales. This shortcoming is often addressed by augmenting the training data with re-scaled images, allowing a model of sufficient capacity to learn the requisite patterns. Alternatively, designing CNN filters to be scale-equivariant frees up model capacity to learn discriminative features. In this paper, we propose the Scale-Equivariant UNet (SEUNet) for image segmentation, building on scale-space theory. The SEUNet contains groups of filters that are linear combinations of Gaussian basis filters, whose scale parameters are trainable but constrained to span disjoint scales through the layers of the network. Extensive experiments on a nuclei segmentation dataset and a tissue type segmentation dataset demonstrate that our method outperforms other approaches, with far fewer trainable parameters.
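The core idea of parameterising filters as linear combinations of Gaussian basis filters with trainable scales can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the kernel size, the example scale band, and the random stand-in coefficients are all assumptions for illustration only.

```python
import numpy as np

def gaussian_kernel_2d(sigma, size=11):
    # Sampled 2-D isotropic Gaussian, normalised to sum to 1.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

# Hypothetical basis: Gaussians at a few scales within one layer's
# scale band (in the paper, each layer's scales are constrained to a
# disjoint range and the sigmas themselves are trainable).
sigmas = [1.0, 1.5, 2.0]
basis = np.stack([gaussian_kernel_2d(s) for s in sigmas])  # (3, 11, 11)

# A filter is a learned linear combination of the basis; random
# coefficients stand in here for trainable weights.
rng = np.random.default_rng(0)
coeffs = rng.standard_normal(len(sigmas))
filt = np.tensordot(coeffs, basis, axes=1)  # (11, 11)
```

Because only the combination coefficients (and scale parameters) are learned rather than every kernel entry, such a layer needs far fewer trainable parameters than a free-form convolution of the same spatial extent.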