Automatic segmentation of organs-at-risk (OARs) in CT scans using convolutional neural networks (CNNs) is being introduced into the radiotherapy workflow. However, these segmentations still require manual editing and approval by clinicians prior to clinical use, which can be time-consuming. The aim of this work was to develop a tool to automatically identify errors in 3D OAR segmentations without a ground truth. Our tool uses a novel architecture combining a CNN and graph neural network (GNN) to leverage the segmentation's appearance and shape. The proposed model is trained via self-supervised learning on a synthetically generated dataset of parotid gland segmentations containing realistic contouring errors. The effectiveness of our model is assessed with ablation tests, evaluating the efficacy of different portions of the architecture as well as the use of transfer learning from an unsupervised pretext task. Our best performing model predicted errors on the parotid gland with a precision of 85.0% and 89.7% for internal and external errors respectively, and a recall of 66.5% and 68.6%. This offline QA tool could be used in the clinical pathway, potentially decreasing the time clinicians spend correcting contours by detecting regions which require their attention. All our code is publicly available at https://github.com/rrr-uom-projects/contour_auto_QATool.
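The core idea of the hybrid architecture described above — per-node appearance features (from a CNN) combined with shape features and propagated over the segmentation surface by a GNN to score each node for errors — can be illustrated with a minimal sketch. This is a simplified, hypothetical message-passing scheme in NumPy for illustration only; all function names, feature dimensions, and the two-layer design are assumptions, not the authors' actual implementation (which is in the linked repository).

```python
import numpy as np

def normalized_adjacency(edges, n_nodes):
    # Symmetric mesh adjacency with self-loops, row-normalized: D^-1 (A + I).
    A = np.eye(n_nodes)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return A / A.sum(axis=1, keepdims=True)

def gnn_layer(H, A_hat, W):
    # One message-passing step: average neighbour features, project, ReLU.
    return np.maximum(A_hat @ H @ W, 0.0)

def predict_node_errors(appearance_feats, shape_feats, edges, weights):
    """Score every surface node of a segmentation for a contouring error.

    appearance_feats: (n_nodes, d_app) CNN features sampled at each node.
    shape_feats:      (n_nodes, d_shape) geometric features (e.g. curvature).
    edges:            list of (i, j) index pairs defining the surface mesh.
    weights:          dict of weight matrices (here random; in practice learned
                      with self-supervision on synthetic error labels).
    """
    H = np.concatenate([appearance_feats, shape_feats], axis=1)
    A_hat = normalized_adjacency(edges, H.shape[0])
    H = gnn_layer(H, A_hat, weights["W1"])
    H = gnn_layer(H, A_hat, weights["W2"])
    logits = H @ weights["W_out"]          # per-node error logit
    return 1.0 / (1.0 + np.exp(-logits))   # probability the node is erroneous
```

A model like this outputs one error probability per mesh node, so flagged regions can be highlighted on the contour surface for the clinician, which matches the per-region QA use case the abstract describes.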