Video-based person re-identification (re-ID) aims at matching the same person across video clips. Efficiently exploiting multi-scale fine-grained features while building structural interactions among them is pivotal to its success. In this paper, we propose a hybrid framework, Dense Interaction Learning (DenseIL), that takes the principal advantages of both CNN-based and attention-based architectures to tackle the difficulties of video-based person re-ID. DenseIL contains a CNN encoder and a Dense Interaction (DI) decoder. The CNN encoder is responsible for efficiently extracting discriminative spatial features, while the DI decoder is designed to densely model the inherent spatial-temporal interactions across frames. Different from previous works, we additionally let the DI decoder densely attend to intermediate fine-grained CNN features, which naturally yields a multi-grained spatial-temporal representation for each video clip. Moreover, we introduce Spatio-TEmporal Positional Embedding (STEP-Emb) into the DI decoder to investigate the positional relations among the spatial-temporal inputs. Our method consistently and significantly outperforms all state-of-the-art methods on multiple standard video-based re-ID datasets.
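To make the core idea concrete, the following is a minimal pure-Python sketch (not the authors' implementation; all names and the toy data are hypothetical) of how a decoder query could attend over spatio-temporal tokens formed by adding a positional embedding, in the spirit of STEP-Emb, to per-frame CNN features:

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(query, keys, values):
    # scaled dot-product attention of one query over all spatio-temporal tokens
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, kk)) / math.sqrt(d) for kk in keys]
    w = softmax(scores)
    return [sum(wi * v[j] for wi, v in zip(w, values)) for j in range(len(values[0]))]

# Hypothetical toy clip: 2 frames x 2 spatial positions, feature dim 4.
frames = [[[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]],
          [[0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0]]]

# Additive spatio-temporal positional embedding (a stand-in for STEP-Emb):
# encodes frame index t and spatial index s directly in the token.
def pos(t, s):
    return [0.1 * t, 0.1 * s, 0.0, 0.0]

tokens = [[f + p for f, p in zip(feat, pos(t, s))]
          for t, frame in enumerate(frames)
          for s, feat in enumerate(frame)]

query = [1.0, 1.0, 0.0, 0.0]       # a single decoder query (hypothetical)
out = attend(query, tokens, tokens)  # clip-level feature aggregated across frames
```

In the real model the decoder would run such attention densely against intermediate CNN feature maps from several layers, producing the multi-grained representation described above; this sketch only illustrates the attention-plus-positional-embedding mechanism on one scale.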