Diagnosis based on medical images, such as X-ray images, often involves manual annotation of anatomical keypoints. However, this process requires significant human effort and can thus become a bottleneck in the diagnostic workflow. To fully automate this procedure, deep-learning-based methods have been widely proposed and have achieved high performance in detecting keypoints in medical images. However, these methods still have clinical limitations: accuracy cannot be guaranteed for all cases, and doctors must still double-check all model predictions. In response, we propose a novel deep neural network that, given an X-ray image, automatically detects and refines the anatomical keypoints through a user-interactive system in which doctors can fix mispredicted keypoints with fewer clicks than manual revision would require. Using our own collected data and the publicly available AASCE dataset, we demonstrate the effectiveness of the proposed method in reducing annotation costs via extensive quantitative and qualitative results. A demo video of our approach is available on our project webpage.