In recent years, self-supervised learning has achieved significant success in computer vision and natural language processing applications. The choice of pretext task is central to this boost in performance. One common pretext task is measuring the similarity and dissimilarity between pairs of images. In that setting, the two images forming a negative pair are visibly different to a human observer. In entomology, however, species are often visually near-indistinguishable and therefore hard to tell apart. In this study, we explored the performance of a Siamese neural network trained with contrastive loss, which learns to push apart the embeddings of dissimilar bumblebee species pairs and pull together the embeddings of similar pairs. Our experimental results show a 61% F1-score on zero-shot instances, an 11% improvement over performance on samples of classes that intersect with the training set.
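To make the training objective concrete, the following is a minimal sketch of a Siamese network with the standard (Hadsell-style) contrastive loss described above. It is not the authors' implementation; the PyTorch backbone, embedding size, margin, and batch shapes are illustrative assumptions.

    # Minimal sketch: shared-weight Siamese encoder + contrastive loss (assumed PyTorch setup).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SiameseNet(nn.Module):
        """Shared-weight encoder mapping an image to an embedding vector."""

        def __init__(self, embedding_dim: int = 128):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(64, embedding_dim),
            )

        def forward(self, x1, x2):
            # The same encoder (shared weights) embeds both images of a pair.
            return self.encoder(x1), self.encoder(x2)

    def contrastive_loss(z1, z2, label, margin: float = 1.0):
        """Contrastive loss over embedding pairs.

        label == 1: similar pair (same species)  -> pull embeddings together.
        label == 0: dissimilar pair              -> push apart up to `margin`.
        """
        d = F.pairwise_distance(z1, z2)
        loss_similar = label * d.pow(2)
        loss_dissimilar = (1 - label) * F.relu(margin - d).pow(2)
        return (loss_similar + loss_dissimilar).mean()

    if __name__ == "__main__":
        model = SiameseNet()
        x1 = torch.randn(8, 3, 64, 64)          # first image of each pair (dummy data)
        x2 = torch.randn(8, 3, 64, 64)          # second image of each pair
        y = torch.randint(0, 2, (8,)).float()   # 1 = same species, 0 = different
        z1, z2 = model(x1, x2)
        print(contrastive_loss(z1, z2, y))

The key design point is that a single encoder with shared weights produces both embeddings, so the loss directly shapes one embedding space in which distance reflects species similarity; zero-shot classes can then be compared in that space without retraining.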