Consistent segmentation of COVID-19 patients' CT scans across multiple time points is essential to accurately assess disease progression and response to therapy. Existing automatic and interactive segmentation models for medical images use data from only a single time point (static). As a result, valuable segmentation information from previous time points is often not used to aid the segmentation of a patient's follow-up scans. Moreover, fully automatic segmentation techniques frequently produce results that require further editing for clinical use. In this work, we propose a new single-network model for interactive segmentation that fully utilizes all available past information to refine the segmentation of follow-up scans. In the first segmentation round, our model takes 3D volumes of medical images from two time points (target and reference) as concatenated slices, with the reference time point's segmentation as an additional guide to segment the target scan. In subsequent refinement rounds, user feedback in the form of scribbles that correct the segmentation, together with the target's previous segmentation results, are additionally fed into the model. This ensures that segmentation information from previous refinement rounds is retained. Experimental results on our in-house multiclass longitudinal COVID-19 dataset show that the proposed model outperforms its static counterpart and can assist in localizing COVID-19 infections in patients' follow-up scans.
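The input construction described above can be sketched as stacking every available information source into a multi-channel volume. This is a minimal illustrative sketch with NumPy, assuming a channel-concatenation layout; the paper's exact ordering, preprocessing, and scribble encoding may differ, and `build_model_input` is a hypothetical helper name.

```python
import numpy as np

def build_model_input(target_vol, reference_vol, reference_seg,
                      prev_target_seg=None, scribbles=None):
    """Stack available guidance into one multi-channel network input.

    First round: target volume, reference volume, and the reference
    time point's segmentation. Refinement rounds additionally include
    the target's previous segmentation and user scribble corrections.
    (Hypothetical sketch; not the authors' exact implementation.)
    """
    channels = [target_vol, reference_vol, reference_seg]
    if prev_target_seg is not None:
        channels.append(prev_target_seg)
    if scribbles is not None:
        channels.append(scribbles)
    # Resulting shape: (C, D, H, W) — one channel per information source.
    return np.stack(channels, axis=0)

# Toy 3D volumes standing in for CT scans and segmentation masks.
D, H, W = 4, 8, 8
tgt = np.random.rand(D, H, W).astype(np.float32)
ref = np.random.rand(D, H, W).astype(np.float32)
ref_seg = np.zeros((D, H, W), dtype=np.float32)

# First segmentation round: 3 channels.
x_round1 = build_model_input(tgt, ref, ref_seg)

# Refinement round: previous target segmentation and scribbles added (5 channels).
prev_seg = np.zeros((D, H, W), dtype=np.float32)
scrib = np.zeros((D, H, W), dtype=np.float32)
x_refine = build_model_input(tgt, ref, ref_seg, prev_seg, scrib)
```

Reusing a single network for both rounds is possible here because the optional channels can be zero-filled in the first round, keeping the input shape fixed across rounds.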