Convolutional Neural Networks (CNNs) with U-shaped architectures have dominated medical image segmentation, which is crucial for various clinical purposes. However, the inherent locality of convolution makes CNNs fail to fully exploit global context, essential for better recognition of some structures, e.g., brain lesions. Transformers have recently demonstrated promising performance on vision tasks, including semantic segmentation, mainly due to their capability of modeling long-range dependencies. Nevertheless, the quadratic complexity of attention forces existing Transformer-based models to apply self-attention layers only after reducing the image resolution, which limits the ability to capture global contexts present at higher resolutions. Therefore, this work introduces a family of models, dubbed Factorizer, which leverages the power of low-rank matrix factorization for constructing an end-to-end segmentation model. Specifically, we propose a linearly scalable approach to context modeling, formulating Nonnegative Matrix Factorization (NMF) as a differentiable layer integrated into a U-shaped architecture. The shifted window technique is also utilized in combination with NMF to effectively aggregate local information. Factorizers compare favorably with CNNs and Transformers in terms of accuracy, scalability, and interpretability, achieving state-of-the-art results on the BraTS dataset for brain tumor segmentation and the ISLES'22 dataset for stroke lesion segmentation. Highly meaningful NMF components give an additional interpretability advantage to Factorizers over CNNs and Transformers. Moreover, our ablation studies reveal a distinctive feature of Factorizers that enables a significant speed-up in inference for a trained Factorizer without any extra steps and without sacrificing much accuracy. The code and models are publicly available at https://github.com/pashtari/factorizer.
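As a rough illustration of the core idea of formulating NMF as a differentiable layer, the sketch below unrolls a fixed number of multiplicative-update iterations in a PyTorch module, so gradients can flow through the low-rank factorization during end-to-end training. This is not the authors' implementation (the official code is in the repository linked above); the class name `NMFLayer`, the random nonnegative initialization, and the hyperparameters `rank` and `num_iters` are illustrative assumptions.

```python
# Minimal sketch of NMF as a differentiable layer (illustrative only; see the
# linked repository for the official Factorizer code). A fixed number of
# multiplicative-update iterations is unrolled so gradients flow through the
# low-rank reconstruction.
import torch
import torch.nn as nn


class NMFLayer(nn.Module):  # hypothetical name, not taken from the paper
    """Approximate a nonnegative matrix X ~ W @ H with W, H >= 0."""

    def __init__(self, rank: int, num_iters: int = 5, eps: float = 1e-8):
        super().__init__()
        self.rank = rank
        self.num_iters = num_iters
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, M, N), assumed nonnegative (e.g., taken after a ReLU).
        b, m, n = x.shape
        # Random nonnegative initialization (an assumption made for this sketch).
        w = torch.rand(b, m, self.rank, device=x.device, dtype=x.dtype)
        h = torch.rand(b, self.rank, n, device=x.device, dtype=x.dtype)
        for _ in range(self.num_iters):
            # Lee-Seung multiplicative updates: they keep W and H nonnegative and
            # consist only of differentiable matrix products and divisions.
            h = h * (w.transpose(1, 2) @ x) / (w.transpose(1, 2) @ w @ h + self.eps)
            w = w * (x @ h.transpose(1, 2)) / (w @ h @ h.transpose(1, 2) + self.eps)
        return w @ h  # low-rank reconstruction serving as a global-context signal


if __name__ == "__main__":
    layer = NMFLayer(rank=8, num_iters=5)
    feats = torch.relu(torch.randn(2, 64, 128))  # nonnegative feature matrix
    print(layer(feats).shape)  # torch.Size([2, 64, 128])
```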