Diabetic retinopathy (DR) is one of the leading causes of blindness. However, early DR often presents no specific symptoms, which leads to delayed diagnosis and disease progression. To determine the severity level of the disease, ophthalmologists need to focus on the discriminative parts of the fundus images. In recent years, deep learning has achieved great success in medical image analysis. However, most existing works directly apply algorithms based on convolutional neural networks (CNNs) and ignore the fact that the differences among severity grades are subtle and gradual. Hence, we treat automatic DR grading as a fine-grained classification task and construct a bilinear model to identify the pathologically discriminative regions. To exploit the ordinal information among classes, we use an ordinal regression method to obtain soft labels. In addition to the categorical loss used to train our network, we introduce a metric loss to learn a more discriminative feature space. Experimental results demonstrate the superior performance of the proposed method on two public datasets, IDRiD and DeepDR.
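To make the three ingredients of the abstract concrete, the following is a minimal PyTorch sketch, not the authors' released code: it combines a bilinear (outer-product) pooling head on a CNN backbone, Gaussian-smoothed soft labels that decay with ordinal distance between grades, and a categorical loss paired with a triplet-style metric loss. The ResNet-18 backbone, the class `BilinearDRNet`, the smoothing parameter `sigma`, the triplet margin, and the choice of triplet loss as the metric loss are all illustrative assumptions, since the abstract does not specify them.

```python
# Hypothetical sketch of bilinear pooling + ordinal soft labels + combined loss.
# Backbone, layer sizes, and the triplet formulation are assumptions, not the
# paper's exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class BilinearDRNet(nn.Module):
    """CNN backbone followed by bilinear (outer-product) pooling."""

    def __init__(self, num_classes: int = 5, embed_dim: int = 256):
        super().__init__()
        backbone = models.resnet18(weights=None)          # assumed backbone
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.proj = nn.Linear(512 * 512, embed_dim)       # embedding for the metric loss
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        f = self.features(x)                              # B x C x H x W
        b, c, h, w = f.shape
        f = f.reshape(b, c, h * w)
        bilinear = torch.bmm(f, f.transpose(1, 2)) / (h * w)   # B x C x C outer product
        bilinear = bilinear.reshape(b, -1)
        # signed square-root and L2 normalisation, as is common for bilinear pooling
        bilinear = torch.sign(bilinear) * torch.sqrt(torch.abs(bilinear) + 1e-8)
        bilinear = F.normalize(bilinear, dim=1)
        embedding = self.proj(bilinear)
        return self.classifier(embedding), embedding


def soft_ordinal_labels(labels, num_classes: int = 5, sigma: float = 1.0):
    """Turn hard grade labels into soft targets that decay with ordinal distance."""
    grades = torch.arange(num_classes, device=labels.device).float()
    dist = (grades.unsqueeze(0) - labels.float().unsqueeze(1)) ** 2
    soft = torch.exp(-dist / (2 * sigma ** 2))
    return soft / soft.sum(dim=1, keepdim=True)


def combined_loss(logits, embeddings, labels, lambda_metric: float = 0.5):
    """Soft-label cross-entropy plus a batch-hard triplet loss on the embeddings."""
    soft = soft_ordinal_labels(labels, logits.size(1))
    ce = -(soft * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    d = torch.cdist(embeddings, embeddings)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    pos = (d * same.float()).max(dim=1).values             # hardest positive per anchor
    neg = (d + same.float() * 1e6).min(dim=1).values        # hardest negative per anchor
    metric = F.relu(pos - neg + 0.3).mean()                 # margin 0.3 is an assumption
    return ce + lambda_metric * metric


if __name__ == "__main__":
    model = BilinearDRNet()
    images = torch.randn(4, 3, 224, 224)
    labels = torch.randint(0, 5, (4,))
    logits, emb = model(images)
    loss = combined_loss(logits, emb, labels)
    loss.backward()
    print(loss.item())
```

In this sketch the soft labels spread probability mass to neighbouring severity grades, so misclassifying grade 3 as grade 2 is penalised less than misclassifying it as grade 0, while the metric term encourages embeddings of the same grade to cluster together.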