The state of the art in video understanding suffers from two problems: (1) The major part of the reasoning is performed locally in the video; therefore, it misses important relationships within actions that span several seconds. (2) While there are local methods with fast per-frame processing, processing the whole video is not efficient and hampers fast video retrieval or online classification of long-term activities. In this paper, we introduce a network architecture that takes long-term content into account and enables fast per-video processing at the same time. The architecture is based on merging long-term content already in the network rather than in a post-hoc fusion. Together with a sampling strategy, which exploits the fact that neighboring frames are largely redundant, this yields high-quality action classification and video captioning at up to 230 videos per second, where each video can consist of a few hundred frames. The approach achieves competitive performance across all datasets while being 10x to 80x faster than state-of-the-art methods.
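The abstract only summarizes the design, so the sketch below illustrates the two ideas it names: sampling a handful of frames spread over the whole video (since neighboring frames are largely redundant) and merging long-term content inside the network instead of by post-hoc fusion of per-clip scores. This is a minimal illustration under assumed components; the names `sample_segment_frames` and `LongTermVideoNet`, the encoder depth, feature dimensions, and segment count are placeholders and not the paper's actual modules or hyperparameters.

```python
# Minimal sketch (PyTorch assumed): segment-wise frame sampling plus in-network
# temporal fusion of per-frame features. Illustrative only.

import torch
import torch.nn as nn


def sample_segment_frames(num_frames: int, num_segments: int) -> torch.Tensor:
    """Pick one frame index per equal-length segment of the video.

    Because neighboring frames are largely redundant, a few frames spread
    over the whole video can stand in for all of them.
    """
    boundaries = torch.linspace(0, num_frames, num_segments + 1)
    # Midpoint of each segment (a random offset within the segment could be
    # used during training for data augmentation).
    centers = ((boundaries[:-1] + boundaries[1:]) / 2).long()
    return centers.clamp(max=num_frames - 1)


class LongTermVideoNet(nn.Module):
    """Per-frame 2D features are stacked and merged by a small 3D network
    inside the model, so long-range temporal relationships are fused before
    the classifier rather than by averaging clip-level predictions afterwards."""

    def __init__(self, num_segments: int = 16, num_classes: int = 400, feat_dim: int = 64):
        super().__init__()
        self.num_segments = num_segments
        # Lightweight 2D encoder applied to each sampled frame independently.
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, kernel_size=7, stride=2, padding=3),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(7),
        )
        # 3D convolution over the stacked per-frame feature maps:
        # this is where long-term content is merged inside the network.
        self.temporal_fusion = nn.Sequential(
            nn.Conv3d(feat_dim, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, num_segments, 3, H, W) -- one sampled frame per segment.
        b, t, c, h, w = frames.shape
        feats = self.frame_encoder(frames.reshape(b * t, c, h, w))    # (b*t, F, 7, 7)
        feats = feats.reshape(b, t, -1, 7, 7).permute(0, 2, 1, 3, 4)  # (b, F, t, 7, 7)
        fused = self.temporal_fusion(feats).flatten(1)                # (b, F)
        return self.classifier(fused)


if __name__ == "__main__":
    video_len = 300                                 # a few hundred frames
    idx = sample_segment_frames(video_len, 16)      # 16 frames cover the whole video
    video = torch.randn(1, video_len, 3, 112, 112)  # dummy decoded video
    clip = video[:, idx]                            # (1, 16, 3, 112, 112)
    logits = LongTermVideoNet(num_segments=16)(clip)
    print(idx.tolist(), logits.shape)
```

Because only a fixed, small number of frames per video passes through the 2D encoder and a single shallow 3D stage fuses them, the cost per video stays nearly constant regardless of video length, which is what makes per-video throughput on the order of hundreds of videos per second plausible.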