As intensities of MRI volumes are inconsistent across institutions, it is essential to extract universal features of multi-modal MRIs to precisely segment brain tumors. To this end, we propose a volumetric vision transformer that follows two windowing strategies in attention to extract fine-grained features, and that enforces local distributional smoothness (LDS) during model training, inspired by virtual adversarial training (VAT), to make the model robust. We trained and evaluated our network architecture on the FeTS Challenge 2022 dataset. Our performance on the online validation dataset is as follows: Dice Similarity Scores of 81.71%, 91.38%, and 85.40%, and Hausdorff Distances (95%) of 14.81 mm, 3.93 mm, and 11.18 mm for the enhancing tumor, whole tumor, and tumor core, respectively. Overall, the experimental results verify our method's effectiveness, yielding better segmentation accuracy for each tumor sub-region. Our code implementation is publicly available at https://github.com/himashi92/vizviva_fets_2022
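To make the LDS regularizer concrete, the sketch below shows the standard VAT formulation (Miyato et al., 2018): a small adversarial perturbation is found by power iteration, and the KL divergence between predictions on the clean and perturbed inputs is penalized. This is a minimal PyTorch illustration of the general technique, not the authors' implementation; the function name and hyperparameter defaults (`xi`, `eps`, `n_power`) are assumptions, and the exact variant used is in the linked repository.

```python
import torch
import torch.nn.functional as F


def _normalize(d):
    # Normalize each sample's perturbation to unit L2 norm.
    norm = d.flatten(1).norm(dim=1).view(-1, *([1] * (d.dim() - 1)))
    return d / (norm + 1e-12)


def lds_loss(model, x, xi=1e-6, eps=1.0, n_power=1):
    """Local distributional smoothness (LDS) term in the style of VAT.

    Approximates the perturbation direction that most changes the model's
    output distribution, then penalizes KL(p(x) || p(x + r_adv)).
    """
    with torch.no_grad():
        pred = F.softmax(model(x), dim=1)  # clean prediction, treated as fixed

    # Random initial direction, one power-iteration step per loop pass.
    d = _normalize(torch.randn_like(x))
    for _ in range(n_power):
        d.requires_grad_(True)
        pred_hat = model(x + xi * d)
        adv_kl = F.kl_div(F.log_softmax(pred_hat, dim=1), pred,
                          reduction="batchmean")
        grad = torch.autograd.grad(adv_kl, d)[0]
        d = _normalize(grad.detach())

    # LDS: divergence between clean and adversarially perturbed predictions.
    pred_hat = model(x + eps * d)
    return F.kl_div(F.log_softmax(pred_hat, dim=1), pred,
                    reduction="batchmean")
```

In a segmentation setting, this term would typically be added to the supervised loss with a weighting coefficient, encouraging the network's predictions to be smooth within a small neighborhood of each input volume.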