Although deep learning models for chest X-ray interpretation are commonly trained on labels generated by automatic radiology report labelers, the impact of improvements in report labeling on the performance of chest X-ray classification models has not been systematically investigated. We first compare the CheXpert, CheXbert, and VisualCheXbert labelers on the task of extracting accurate chest X-ray image labels from radiology reports, finding that the VisualCheXbert labeler outperforms the CheXpert and CheXbert labelers. Next, we train image classification models on one of the largest chest X-ray datasets using labels generated by each of the report labelers, and show that the model trained on VisualCheXbert labels outperforms the models trained on CheXpert and CheXbert labels. Our work suggests that recent improvements in radiology report labeling can translate to the development of higher-performing chest X-ray classification models.
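To make the labeler comparison concrete, the following is a minimal sketch (not the authors' code) of how report-labeler outputs could be scored against expert annotations using per-condition F1. The label matrices, condition subset, and labeler names used as dictionary keys are hypothetical placeholders; in practice the predictions would come from running each labeler on the same set of radiology reports.

```python
# Minimal sketch: compare report labelers against expert annotations via per-condition F1.
# All label data below is randomly generated for illustration only.
import numpy as np
from sklearn.metrics import f1_score

CONDITIONS = ["Atelectasis", "Cardiomegaly", "Edema", "Pleural Effusion"]  # illustrative subset

def per_condition_f1(pred: np.ndarray, truth: np.ndarray) -> dict:
    """F1 for each condition column of a (reports x conditions) binary matrix."""
    return {
        cond: f1_score(truth[:, j], pred[:, j], zero_division=0)
        for j, cond in enumerate(CONDITIONS)
    }

# Hypothetical binary label matrices: rows = reports, columns = conditions.
rng = np.random.default_rng(0)
expert_labels = rng.integers(0, 2, size=(100, len(CONDITIONS)))
labeler_outputs = {
    "CheXpert": rng.integers(0, 2, size=(100, len(CONDITIONS))),
    "CheXbert": rng.integers(0, 2, size=(100, len(CONDITIONS))),
    "VisualCheXbert": rng.integers(0, 2, size=(100, len(CONDITIONS))),
}

for name, preds in labeler_outputs.items():
    scores = per_condition_f1(preds, expert_labels)
    mean_f1 = sum(scores.values()) / len(scores)
    print(f"{name}: mean F1 = {mean_f1:.3f}")
```

The same per-condition metric could then be reused to evaluate the downstream image classification models, so that labeler quality and classifier quality are measured on a consistent scale.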