Spatiotemporal action localization requires the incorporation of two sources of information into the designed architecture: (1) temporal information from the previous frames and (2) spatial information from the key frame. Current state-of-the-art approaches usually extract this information with separate networks and use an extra fusion mechanism to obtain detections. In this work, we present YOWO, a unified CNN architecture for real-time spatiotemporal action localization in video streams. YOWO is a single-stage architecture with two branches that extract temporal and spatial information concurrently and predict bounding boxes and action probabilities directly from video clips in one evaluation. Since the whole architecture is unified, it can be optimized end-to-end. YOWO is fast, running at 34 frames per second for 16-frame input clips and 62 frames per second for 8-frame input clips, which makes it currently the fastest state-of-the-art architecture for spatiotemporal action localization. Remarkably, YOWO outperforms the previous state-of-the-art results on J-HMDB-21 and UCF101-24 with impressive improvements of ~3% and ~12%, respectively. Moreover, YOWO is the first and only single-stage architecture that provides competitive results on the AVA dataset. We make our code and pretrained models publicly available.
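To illustrate the two-branch, single-pass idea described above, the following is a minimal PyTorch sketch, not the authors' implementation: a 3D-CNN branch encodes the input clip, a 2D-CNN branch encodes the key frame, the feature maps are fused along the channel dimension, and a 1x1 convolutional head predicts boxes and action scores in one evaluation. The tiny backbones, layer sizes, and simple concatenation fusion here are illustrative placeholders and do not reflect YOWO's actual backbones or fusion module.

```python
import torch
import torch.nn as nn

class TwoBranchDetector(nn.Module):
    """Illustrative two-branch detector: clip (3D) + key frame (2D) -> one-shot head."""
    def __init__(self, num_classes=24, num_anchors=5):
        super().__init__()
        # 3D branch: temporal information from the whole clip
        self.branch_3d = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, stride=(1, 2, 2), padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d((1, 28, 28)),  # collapse the temporal axis
        )
        # 2D branch: spatial information from the key frame
        self.branch_2d = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d((28, 28)),
        )
        # Fusion + detection head: per anchor, 4 box coords + 1 objectness + class scores
        self.head = nn.Conv2d(64, num_anchors * (4 + 1 + num_classes), kernel_size=1)

    def forward(self, clip):
        # clip: (batch, 3, clip_len, H, W); here the key frame is taken as the last frame
        key_frame = clip[:, :, -1]                    # (batch, 3, H, W)
        feat_3d = self.branch_3d(clip).squeeze(2)     # (batch, 32, 28, 28)
        feat_2d = self.branch_2d(key_frame)           # (batch, 32, 28, 28)
        fused = torch.cat([feat_3d, feat_2d], dim=1)  # channel-wise fusion
        return self.head(fused)                       # single-pass predictions

model = TwoBranchDetector()
out = model(torch.randn(2, 3, 16, 224, 224))
print(out.shape)  # (2, num_anchors * (4 + 1 + num_classes), 28, 28)
```

Because both branches and the detection head sit in one network, a single loss on the head's output can be backpropagated through the whole model, which is what makes end-to-end optimization possible in this kind of unified design.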