Some recent studies have described deep convolutional neural networks that diagnose breast cancer in mammograms with performance similar or even superior to that of human experts. One of the best techniques applies transfer learning twice: the first stage uses a model trained on natural images to create a "patch classifier" that categorizes small subimages; the second uses the patch classifier to scan the whole mammogram and create a "single-view whole-image classifier". We propose a third transfer learning stage to obtain a "two-view classifier" that exploits the two standard mammographic views: bilateral craniocaudal and mediolateral oblique. We use EfficientNet as the basis of our model and train the entire system end-to-end on the CBIS-DDSM dataset. To ensure statistical robustness, we evaluate our system in two ways: (a) 5-fold cross-validation; and (b) the original training/test split of the dataset. Our technique reached an AUC of 0.9344 using 5-fold cross-validation (accuracy, sensitivity, and specificity are all 85.13% at the equal-error-rate point of the ROC curve). Using the original dataset split, our technique achieved an AUC of 0.8483, which is, as far as we know, the highest reported AUC for this problem, although subtle differences in the testing conditions of each work do not allow for an exact comparison. The inference code and model are available at https://github.com/dpetrini/two-views-classifier
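To make the two-view fusion concrete, the following is a minimal sketch, assuming PyTorch and torchvision's EfficientNet-B0; the class and variable names here are illustrative, not the authors' actual code, which is available in the linked repository. It shows the core idea: a shared single-view backbone (which, in the paper's pipeline, would be initialized from the whole-image classifier produced by the second transfer learning stage) is applied to both views, and the pooled features are concatenated for the final prediction.

```python
# Hypothetical sketch of a two-view classifier; see the authors'
# repository for the real implementation.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0


class TwoViewClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Shared single-view feature extractor. In the paper this would be
        # initialized with the weights of the single-view whole-image
        # classifier (second transfer learning); ImageNet weights are used
        # here only as a stand-in.
        backbone = efficientnet_b0(weights="IMAGENET1K_V1")
        self.features = backbone.features            # convolutional stages
        self.pool = nn.AdaptiveAvgPool2d(1)
        feat_dim = backbone.classifier[1].in_features  # 1280 for B0
        # Fusion head over the concatenated CC + MLO feature vectors.
        self.head = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, cc: torch.Tensor, mlo: torch.Tensor) -> torch.Tensor:
        f_cc = self.pool(self.features(cc)).flatten(1)
        f_mlo = self.pool(self.features(mlo)).flatten(1)
        return self.head(torch.cat([f_cc, f_mlo], dim=1))


# Usage: logits = TwoViewClassifier()(cc_batch, mlo_batch), where each
# batch holds one mammographic view of the same breasts.
```

Because the backbone is shared across views, the whole pipeline remains differentiable and can be fine-tuned end-to-end, as the paper does.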