We propose a novel and flexible attention-based U-Net architecture, referred to as the "Voxels-Intersecting Along Orthogonal Levels Attention U-Net" (viola-Unet), for the intracranial hemorrhage (ICH) segmentation task on non-contrast computed tomography (CT) in the INSTANCE 2022 Data Challenge. ICH segmentation performance was improved by efficiently incorporating fused spatially orthogonal and cross-channel features via our proposed Viola attention module plugged into the U-Net decoding branches. The viola-Unet outperformed strong baseline nnU-Net models during both 5-fold cross-validation and online validation. Our solution won the validation phase of the challenge on all four performance metrics (i.e., DSC, HD, NSD, and RVD). The code base, pretrained weights, and Docker image of the viola-Unet AI tool are publicly available at \url{https://github.com/samleoqh/Viola-Unet}.
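The abstract does not detail the internals of the Viola attention mechanism. As a rough illustration only, the NumPy sketch below shows one plausible reading of "voxels intersecting along orthogonal levels": pool the feature map along each of the three spatial axes, broadcast the resulting planar descriptors back over the volume so that each voxel sits at the intersection of three orthogonal summaries, and use their sigmoid-gated sum to reweight the input. The function name `viola_attention_sketch` and the pooling/broadcast scheme are our assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def viola_attention_sketch(x):
    """Hypothetical sketch of orthogonal-axis attention fusion.

    x: feature map of shape (C, D, H, W). Mean-pool along each
    spatial axis to get a planar descriptor, broadcast the three
    descriptors back to the full volume so they intersect at every
    voxel, and gate the input with a sigmoid of their sum.
    """
    a_d = x.mean(axis=1)  # (C, H, W): pooled along depth
    a_h = x.mean(axis=2)  # (C, D, W): pooled along height
    a_w = x.mean(axis=3)  # (C, D, H): pooled along width
    # Broadcast the three orthogonal descriptors to (C, D, H, W)
    gate = sigmoid(a_d[:, None, :, :] + a_h[:, :, None, :] + a_w[:, :, :, None])
    return x * gate  # same shape as x, attention-weighted

# Example on a small random 3D feature map
x = np.random.default_rng(0).standard_normal((4, 8, 16, 16))
y = viola_attention_sketch(x)
print(y.shape)  # (4, 8, 16, 16)
```

Because the gate lies in (0, 1), the output preserves the input shape while attenuating each voxel according to its three orthogonal context summaries; the authors' actual module may differ substantially.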