With the COVID-19 global pandemic, computer-assisted diagnosis of medical images has gained a lot of attention, and robust methods for semantic segmentation of Computed Tomography (CT) scans have become highly desirable. Semantic segmentation of CT is one of many research fields in the automatic detection of COVID-19 and has been widely explored since the COVID-19 outbreak. In robotics, semantic segmentation of organs and CT scans is widely used in robots developed for surgical tasks. As new methods and new datasets are proposed quickly, the need for an extensive evaluation of those methods becomes apparent. To provide a standardized comparison of different architectures across multiple recently proposed datasets, we propose in this paper an extensive benchmark of multiple encoders and decoders, with a total of 120 architectures evaluated on five datasets, each dataset validated through a five-fold cross-validation strategy, totaling 3,000 experiments. To the best of our knowledge, this is the largest evaluation, in the number of encoders, decoders, and datasets, proposed in the field of COVID-19 CT segmentation.
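The experiment count follows directly from the size of the evaluation grid: 120 encoder-decoder architectures, five datasets, and five cross-validation folds. A minimal sketch of how such a grid is enumerated is shown below; the 24-encoder by 5-decoder split and all names are illustrative assumptions, not the specific models evaluated in the paper.

```python
from itertools import product

# Illustrative grid: 24 encoders x 5 decoders = 120 architectures.
# The actual encoder/decoder names are placeholders, not the paper's models.
encoders = [f"encoder_{i}" for i in range(24)]   # e.g., CNN backbones
decoders = [f"decoder_{j}" for j in range(5)]    # e.g., U-Net-style heads
datasets = [f"dataset_{k}" for k in range(5)]    # five COVID-19 CT datasets
folds = range(5)                                 # five-fold cross-validation

# One experiment per (encoder, decoder, dataset, fold) combination.
experiments = list(product(encoders, decoders, datasets, folds))
print(len(experiments))  # 24 * 5 * 5 * 5 = 3000
```

Enumerating the grid up front also makes it easy to shard the 3,000 runs across machines or to resume an interrupted benchmark from a fixed experiment index.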