Video-based person re-identification (ReID) aims to match a given pedestrian video sequence across multiple non-overlapping cameras. To aggregate the temporal and spatial features of video samples, graph neural networks (GNNs) have been introduced. However, existing graph-based models, such as STGCN, perform \textit{mean}/\textit{max pooling} on node features to obtain the graph representation, which neglects the graph topology and node importance. In this paper, we propose a graph pooling network (GPNet) to learn a multi-granularity graph representation for video retrieval, in which a \textit{graph pooling layer} is implemented to downsample the graph. We first construct a multi-granular graph, whose node features denote image embeddings learned by a backbone network and whose edges connect temporal and Euclidean-neighborhood nodes. We then apply multiple graph convolutional layers to perform neighborhood aggregation on the graphs. To downsample the graph, we propose a multi-head full-attention graph pooling (MHFAPool) layer, which integrates the advantages of existing node-clustering and node-selection pooling methods. Specifically, MHFAPool takes the principal eigenvector of the full attention matrix as the aggregation coefficients, so that each pooled node involves global graph information. Extensive experiments demonstrate that our GPNet achieves competitive results on four widely used datasets, i.e., MARS, DukeMTMC-VideoReID, iLIDS-VID and PRID-2011.
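To make the pooling idea concrete, the following is a minimal sketch (not the paper's implementation) of one head of a full-attention pooling step: build a dense attention matrix over the nodes, extract its principal (Perron) left eigenvector by power iteration, and use the normalized entries as aggregation coefficients over all node features. The projection matrices `wq`/`wk` stand in for learned parameters and are randomly initialized here; the choice of the left eigenvector (as in PageRank) is an assumption, since a row-stochastic attention matrix has a trivial uniform right eigenvector.

```python
import numpy as np

def mhfa_pool(x, n_heads=2, n_iters=50, seed=0):
    """Hypothetical simplification of multi-head full-attention pooling:
    per head, form a full attention matrix over the N nodes, take its
    principal left eigenvector via power iteration, and use it as the
    aggregation coefficients for a weighted sum of node features."""
    rng = np.random.default_rng(seed)
    n, d = x.shape
    pooled = []
    for _ in range(n_heads):
        # Stand-ins for learned query/key projections.
        wq = rng.standard_normal((d, d)) / np.sqrt(d)
        wk = rng.standard_normal((d, d)) / np.sqrt(d)
        q, k = x @ wq, x @ wk
        scores = q @ k.T / np.sqrt(d)
        # Row-wise softmax -> dense, row-stochastic attention matrix.
        a = np.exp(scores - scores.max(axis=1, keepdims=True))
        a /= a.sum(axis=1, keepdims=True)
        # Power iteration for the principal left eigenvector of A
        # (the stationary distribution, entries > 0 by Perron-Frobenius).
        v = np.ones(n) / n
        for _ in range(n_iters):
            v = a.T @ v
            v /= np.linalg.norm(v)
        coeff = np.abs(v)
        coeff /= coeff.sum()          # normalize to aggregation weights
        pooled.append(coeff @ x)      # global weighted sum over all nodes
    return np.stack(pooled)           # shape (n_heads, d)

nodes = np.random.default_rng(1).standard_normal((6, 8))  # 6 nodes, dim 8
out = mhfa_pool(nodes)
print(out.shape)  # (2, 8)
```

Because every coefficient is strictly positive, each pooled vector mixes information from all nodes, which is the stated contrast with node-selection pooling that keeps only a top-k subset.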