Annotations from domain experts are important for medical applications where an objective ground truth is difficult to define, e.g., rehabilitation assessment for chronic diseases and prescreening of musculoskeletal abnormalities without further medical examination. However, improper use of such annotations can hinder the development of reliable models. On one hand, forcing the use of a single ground truth derived from multiple annotations discards information useful for modeling. On the other hand, feeding the model all annotations without proper regularization introduces noise, given the disagreements among annotators. To address these issues, we propose a novel Learning to Agreement (Learn2Agree) framework to tackle the challenge of learning from multiple annotators without an objective ground truth. The framework has two streams: one stream fits the labels of the multiple annotators, while the other learns agreement information between annotators. In particular, the agreement learning stream provides regularization information to the classifier stream, tuning its decisions to better align with the agreement between annotators. The proposed method can easily be added to existing backbones; experiments on two medical datasets show that it achieves better agreement levels with annotators.
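To make the two-stream idea concrete, below is a minimal PyTorch sketch of one way such a framework could be wired up. This is not the authors' implementation: the class name `Learn2AgreeSketch`, the layer sizes, the choice of mean positive vote as the agreement target, and the MSE-based agreement and regularization losses are all illustrative assumptions.

```python
# A minimal sketch of a two-stream annotator-fitting model, assuming a shared
# backbone and binary labels from several annotators. Architecture details and
# loss weights are illustrative, not the paper's actual design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Learn2AgreeSketch(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 64, n_annotators: int = 4):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # Classifier stream: one logit per annotator, fit to that annotator's labels.
        self.classifier = nn.Linear(hidden, n_annotators)
        # Agreement stream: predicts the agreement level among annotators,
        # here taken to be the fraction of positive votes per sample.
        self.agreement = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.backbone(x)
        return self.classifier(h), torch.sigmoid(self.agreement(h))

def loss_fn(annotator_logits, agree_pred, labels, lam=0.5):
    """labels: (batch, n_annotators) binary labels, one column per annotator."""
    # Stream 1: fit each annotator's labels directly.
    fit_loss = F.binary_cross_entropy_with_logits(annotator_logits, labels)
    # Stream 2: learn the observed agreement (mean positive vote per sample).
    observed_agree = labels.mean(dim=1, keepdim=True)
    agree_loss = F.mse_loss(agree_pred, observed_agree)
    # Regularization: pull the classifier stream's averaged prediction toward
    # the agreement estimate, tuning decisions to align with annotator agreement.
    mean_prob = torch.sigmoid(annotator_logits).mean(dim=1, keepdim=True)
    reg = F.mse_loss(mean_prob, agree_pred.detach())
    return fit_loss + agree_loss + lam * reg

# Usage on dummy data:
model = Learn2AgreeSketch(in_dim=16)
x = torch.randn(8, 16)
labels = torch.randint(0, 2, (8, 4)).float()
logits, agree = model(x)
loss_fn(logits, agree, labels).backward()
```

Detaching the agreement prediction in the regularization term is one plausible choice: it lets agreement information flow into the classifier stream without letting the classifier distort the agreement estimate in return.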