The idea of pooling comes from the visual system: it is a process of abstracting information.


Graph pooling is a central component of many graph neural network (GNN) architectures. Inherited from classical CNNs, most approaches formulate graph pooling as a cluster-assignment problem, extending the idea of local patches on regular grids to graphs. Despite the wide adherence to this design choice, no work has rigorously evaluated its influence on the success of GNNs. We build on representative GNNs and introduce variants that challenge the need for locality-preserving representations, either by using randomization or by clustering the complement graph. Strikingly, our experiments show that using these variants does not cause any performance degradation. To understand this phenomenon, we study the interplay between convolutional layers and the subsequent pooling layers. We show that the convolutions play the leading role in the learned representations. Contrary to common belief, local pooling is not responsible for the success of GNNs on relevant and widely-used benchmarks.
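The cluster-assignment view of graph pooling described above, and the randomized variant the authors use to challenge it, can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation; the function and variable names (`pool`, `S_local`, `S_rand`) are my own, and the assignment matrices are toy examples.

```python
import numpy as np

def pool(X, A, S):
    """Coarsen a graph given a cluster-assignment matrix S (n x k).

    X: node features (n x d); A: adjacency matrix (n x n).
    Returns pooled features (k x d) and coarsened adjacency (k x k),
    the standard assignment-based pooling formulation.
    """
    X_pool = S.T @ X       # aggregate node features into clusters
    A_pool = S.T @ A @ S   # connect clusters whose members were connected
    return X_pool, A_pool

rng = np.random.default_rng(0)
n, d, k = 6, 4, 2
X = rng.normal(size=(n, d))
A = np.ones((n, n)) - np.eye(n)  # toy graph: complete graph on 6 nodes

# Locality-preserving assignment: nodes 0-2 -> cluster 0, nodes 3-5 -> cluster 1.
S_local = np.zeros((n, k))
S_local[:3, 0] = 1.0
S_local[3:, 1] = 1.0

# Variant challenged in the text: assign each node to a uniformly random cluster.
S_rand = np.eye(k)[rng.integers(0, k, size=n)]

X_loc, A_loc = pool(X, A, S_local)
X_rnd, A_rnd = pool(X, A, S_rand)
```

Both assignments produce coarsened graphs of identical shape; the paper's finding is that, after the convolutional layers have done their work, downstream performance is largely insensitive to which of the two is used.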


Latest Content

Large-scale trademark retrieval is an important content-based image retrieval task. A recent study shows that off-the-shelf deep features aggregated with Regional-Maximum Activation of Convolutions (R-MAC) achieve state-of-the-art results. However, R-MAC suffers in the presence of background clutter/trivial regions and scale variance, and discards important spatial information. We introduce three simple but effective modifications to R-MAC to overcome these drawbacks. First, we propose the use of both sum and max pooling to minimise the loss of spatial information. We also employ domain-specific unsupervised soft-attention to eliminate background clutter and unimportant regions. Finally, we add multi-resolution inputs to enhance the scale-invariance of R-MAC. We evaluate these three modifications on the million-scale METU dataset. Our results show that all modifications bring non-trivial improvements, and surpass previous state-of-the-art results.

