While deep models have shown promising performance in medical image segmentation, they rely heavily on large amounts of well-annotated data, which are difficult to obtain, especially in clinical practice. On the other hand, high-accuracy deep models usually come with large model sizes, limiting their deployment in real-world scenarios. In this work, we propose ACT-Net, a novel asymmetric co-teacher framework that alleviates the burden of both expensive annotations and computational cost through semi-supervised knowledge distillation. We advance teacher-student learning with a co-teacher network that facilitates asymmetric knowledge distillation from large models to small ones by alternating the student and teacher roles, yielding tiny yet accurate models for clinical deployment. To verify the effectiveness of ACT-Net, we conduct experiments on the ACDC dataset for cardiac substructure segmentation. Extensive experimental results demonstrate that ACT-Net outperforms other knowledge distillation methods and achieves lossless segmentation performance with 250x fewer parameters.
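The abstract does not spell out the distillation objective, so as context: co-teacher frameworks of this kind typically build on the standard soft-target distillation loss, where the small (student) model matches the temperature-softened output distribution of the large (teacher) model. A minimal sketch of that vanilla loss; the function names and the temperature value are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution.
    z = np.asarray(logits, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 as in standard knowledge distillation.
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)  # student predictions
    return float(np.sum(p * (np.log(p) - np.log(q))) * T * T)
```

In the asymmetric co-teacher setting described above, the roles of the two networks producing `teacher_logits` and `student_logits` would alternate during training, rather than stay fixed as in this one-directional sketch.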