Not all supervised learning problems are described by a pair of a fixed-size input tensor and a label. In some cases, especially in medical image analysis, a label corresponds to a bag of instances (e.g. image patches), and classifying such a bag requires aggregating information from all of the instances. There have been several attempts to create a model that works with a bag of instances; however, they assume that there are no dependencies within the bag and that the label is connected to at least one instance. In this work, we introduce the Self-Attention Attention-based MIL Pooling (SA-AbMILP) aggregation operation to account for dependencies between instances. We conduct several experiments on MNIST, histological, microbiological, and retinal databases to show that SA-AbMILP performs better than other models. Additionally, we investigate kernel variations of Self-Attention and their influence on the results.
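To make the aggregation idea concrete, below is a minimal sketch of self-attention followed by attention-based MIL pooling over a bag of instance embeddings, assuming the standard dot-product self-attention kernel and an Ilse-et-al.-style attention pooling head; the module name, layer sizes, and variable names are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class SelfAttentionMILPooling(nn.Module):
    """Hypothetical sketch: self-attention over instances, then attention pooling."""
    def __init__(self, feat_dim=512, attn_dim=128):
        super().__init__()
        # Self-attention projections (dot-product kernel as one possible choice).
        self.query = nn.Linear(feat_dim, feat_dim // 8, bias=False)
        self.key = nn.Linear(feat_dim, feat_dim // 8, bias=False)
        self.value = nn.Linear(feat_dim, feat_dim, bias=False)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight
        # Attention-based MIL pooling: scores one weight per instance.
        self.attn = nn.Sequential(
            nn.Linear(feat_dim, attn_dim), nn.Tanh(), nn.Linear(attn_dim, 1)
        )

    def forward(self, h):
        # h: (num_instances, feat_dim) -- embeddings of all patches in one bag.
        q, k, v = self.query(h), self.key(h), self.value(h)
        sim = torch.softmax(q @ k.t() / q.shape[-1] ** 0.5, dim=-1)  # instance-to-instance dependencies
        h = h + self.gamma * (sim @ v)          # residual self-attention update
        a = torch.softmax(self.attn(h), dim=0)  # (num_instances, 1) pooling weights
        z = (a * h).sum(dim=0)                  # bag-level representation
        return z, a

bag = torch.randn(17, 512)                       # a bag of 17 instance embeddings
z, weights = SelfAttentionMILPooling()(bag)
print(z.shape, weights.shape)                    # torch.Size([512]) torch.Size([17, 1])
```

The bag-level vector `z` would then be passed to a classifier, while the weights `a` indicate which instances drive the prediction; swapping the dot-product similarity for other kernels corresponds to the kernel variations mentioned above.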