In this study, we propose a tailored DL framework for patient-specific performance that leverages the behavior of a model intentionally overfitted to a patient-specific training dataset augmented from the prior information available in an ART workflow, an approach we term Intentional Deep Overfit Learning (IDOL). Implementing the IDOL framework for any task in radiotherapy consists of two training stages: 1) training a generalized model with a diverse training dataset of N patients, just as in the conventional DL approach, and 2) intentionally overfitting this general model to a small training dataset specific to the patient of interest (N+1), generated through perturbations and augmentations of the available task- and patient-specific prior information, to establish a personalized IDOL model. The IDOL framework itself is task-agnostic and is thus widely applicable to many components of the ART workflow, three of which we use as a proof of concept here: the auto-contouring task on re-planning CTs for traditional ART, the MRI super-resolution (SR) task for MRI-guided ART, and the synthetic CT (sCT) reconstruction task for MRI-only ART. In the re-planning CT auto-contouring task, accuracy measured by the Dice similarity coefficient improves from 0.847 with the general model to 0.935 with the IDOL model. In the case of MRI SR, the mean absolute error (MAE) is improved by 40% using the IDOL framework over the conventional model. Finally, in the sCT reconstruction task, the MAE is reduced from 68 to 22 HU by utilizing the IDOL framework.
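To make the two-stage procedure concrete, the following is a minimal sketch of IDOL-style training in PyTorch. It is not the authors' implementation: the placeholder network, dataset tensors, augmentation (additive noise on a single patient-specific prior image), epoch counts, and learning rates are all hypothetical stand-ins for the task-specific choices (auto-contouring, MRI SR, or sCT reconstruction) described in the paper.

```python
# Hypothetical sketch of the two-stage IDOL training procedure.
# build_model, the synthetic tensors, and the noise-based augmentation are
# illustrative assumptions, not details from the original study.
import copy
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset


def build_model() -> nn.Module:
    # Placeholder network; a real implementation would use a task-specific
    # architecture (e.g., a U-Net for contouring or sCT reconstruction).
    return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1))


def train(model: nn.Module, loader: DataLoader, epochs: int, lr: float) -> nn.Module:
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()  # MAE-style loss as a simple stand-in
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model


# Stage 1: train a generalized model on a diverse N-patient dataset.
general_x = torch.randn(64, 1, 64, 64)   # stand-in for N-patient images
general_y = torch.randn(64, 1, 64, 64)   # stand-in for labels/targets
general_loader = DataLoader(TensorDataset(general_x, general_y), batch_size=8)
general_model = train(build_model(), general_loader, epochs=5, lr=1e-3)

# Stage 2: intentionally overfit a copy of the general model to a small
# dataset generated by perturbing/augmenting the (N+1)-th patient's priors.
prior_x = torch.randn(1, 1, 64, 64)      # patient-specific prior image
prior_y = torch.randn(1, 1, 64, 64)      # patient-specific prior label
aug_x = torch.cat([prior_x + 0.05 * torch.randn_like(prior_x) for _ in range(32)])
aug_y = prior_y.repeat(32, 1, 1, 1)
patient_loader = DataLoader(TensorDataset(aug_x, aug_y), batch_size=8)
idol_model = train(copy.deepcopy(general_model), patient_loader, epochs=50, lr=1e-4)
```

The key design choice illustrated here is that stage 2 starts from a copy of the stage-1 weights and deliberately trains for many epochs on a tiny, augmented, single-patient dataset, trading generalization for accuracy on that one patient.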