A new unified video analytics framework (ER3) is proposed for complex event retrieval, recognition, and recounting, built on the proposed video imprint representation, which exploits temporal correlations among image features across video frames. The video imprint can be conveniently mapped back to both temporal and spatial locations in the original video frames, enabling key frame identification as well as localization of key areas within each frame. In the proposed framework, a dedicated feature alignment module removes redundancy across frames to produce the tensor representation, i.e., the video imprint. The video imprint is then fed separately into a reasoning network and a feature aggregation module, for the event recognition/recounting and event retrieval tasks, respectively. Thanks to an attention mechanism inspired by the memory networks used in language modeling, the proposed reasoning network is capable of simultaneously recognizing the event category and locating the key pieces of evidence for event recounting. In addition, the latent structure of the reasoning network highlights salient areas of the video imprint, which can be used directly for event recounting. For the event retrieval task, the compact video representation aggregated from the video imprint yields better retrieval results than existing state-of-the-art methods.
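The two branches described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the imprint dimensions, the single-hop attention, the query vector, and the mean-pooling aggregation are all illustrative assumptions; it only shows how one attention pass over imprint cells can yield both an evidence vector (recognition) and a spatial heatmap (recounting), while a separate pooling step yields a compact retrieval descriptor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the paper does not fix these in the abstract.
H, W, D = 8, 8, 64                          # imprint grid height/width, feature dim
imprint = rng.standard_normal((H, W, D))    # stand-in for the video imprint tensor

# --- Reasoning branch: one memory-network-style attention hop ---
# A query scores every imprint cell; the softmax weights double as a
# recounting heatmap over the imprint's spatial grid.
query = rng.standard_normal(D)              # illustrative query, not the paper's
cells = imprint.reshape(-1, D)              # (H*W, D)
scores = cells @ query / np.sqrt(D)
attn = np.exp(scores - scores.max())
attn /= attn.sum()                          # attention weights, sum to 1
evidence = attn @ cells                     # attended evidence vector (recognition)
heatmap = attn.reshape(H, W)                # key-evidence localization (recounting)

# --- Retrieval branch: aggregate imprint into a compact descriptor ---
descriptor = cells.mean(axis=0)             # simple mean pooling as a stand-in
descriptor /= np.linalg.norm(descriptor)    # L2-normalize for retrieval
```

In a trained model the query and aggregation would be learned; here the point is only that both outputs derive from the same imprint tensor.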