Over the past few years, significant progress has been made in image recognition based on deep convolutional neural networks (CNNs), mainly due to the strong ability of such networks to mine discriminative object pose and part information from texture and shape. However, this is often insufficient for fine-grained visual classification (FGVC), which exhibits high intra-class and low inter-class variance due to occlusions, deformation, illumination changes, etc. An expressive feature representation describing global structural information is therefore key to characterizing an object/scene. To this end, we propose a method that effectively captures subtle changes by aggregating context-aware features from the most relevant image regions, together with their importance in discriminating fine-grained categories, while avoiding bounding-box and/or distinguishable part annotations. Our approach is inspired by recent advances in self-attention and graph neural networks (GNNs): it introduces a simple yet effective relation-aware feature transformation and refines it using a context-aware attention mechanism to boost the discriminability of the transformed features in an end-to-end learning process. Our model is evaluated on eight benchmark datasets consisting of fine-grained objects and human-object interactions, and it outperforms state-of-the-art approaches in recognition accuracy by a significant margin.
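To make the two ideas named above concrete, the following is a minimal sketch (not the authors' exact model) of a relation-aware transformation of region features, in the style of self-attention/GNN message passing over pairwise region relations, followed by a context-aware attention pooling that weights regions by their discriminative importance. All module names, variable names, and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RelationAwareContextPooling(nn.Module):
    def __init__(self, dim: int = 512):
        super().__init__()
        # Projections for pairwise relation scores between regions (self-attention style).
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)
        # Context-aware attention: scores each region against a global context vector.
        self.context_score = nn.Linear(2 * dim, 1)

    def forward(self, regions: torch.Tensor) -> torch.Tensor:
        # regions: (batch, num_regions, dim) pooled CNN features of candidate image regions.
        q, k, v = self.query(regions), self.key(regions), self.value(regions)
        # Pairwise relation matrix: (batch, num_regions, num_regions).
        rel = torch.softmax(q @ k.transpose(1, 2) / q.size(-1) ** 0.5, dim=-1)
        # Relation-aware transformation: each region aggregates features from
        # related regions; the residual keeps the original feature.
        transformed = regions + rel @ v
        # Global context = mean of the transformed region features.
        context = transformed.mean(dim=1, keepdim=True).expand_as(transformed)
        # Importance of each region given the context, normalized over regions.
        alpha = torch.softmax(
            self.context_score(torch.cat([transformed, context], dim=-1)), dim=1)
        # Weighted aggregation into a single image-level descriptor.
        return (alpha * transformed).sum(dim=1)  # (batch, dim)

# Usage: 8 candidate regions per image, 512-d features each.
feats = torch.randn(4, 8, 512)
pooled = RelationAwareContextPooling(512)(feats)
print(pooled.shape)  # torch.Size([4, 512])
```

The pooled descriptor would then feed a standard classification head; the region proposals, attention form, and refinement details of the actual method are beyond what the abstract specifies.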