In this paper, we propose TubeR: the first transformer-based network for end-to-end action detection, with an encoder and decoder optimized for modeling action tubes of variable length and aspect ratio. TubeR does not rely on hand-designed tube structures; it automatically links predicted action boxes over time and learns a set of tube queries related to actions. By learning action tube embeddings, TubeR predicts more precise action tubes with flexible spatial and temporal extents. Our experiments demonstrate that TubeR achieves state-of-the-art performance among single-stream methods on UCF101-24 and J-HMDB. TubeR outperforms existing one-model methods on AVA and is even competitive with two-model methods. Moreover, we observe that TubeR has the potential to track actors performing different actions, which can foster future research in long-range video understanding.
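To make the query-based design concrete, below is a minimal sketch of a DETR-style decoder with learned tube queries, where each query attends to encoded video features and predicts an action class plus one box per frame (a tube). All module names, dimensions, and head designs here are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class TubeQueryDecoder(nn.Module):
    """Illustrative sketch: learned tube queries decoded against video features.
    Sizes and heads are assumptions for demonstration, not TubeR's real design."""

    def __init__(self, num_queries=15, d_model=256, num_classes=24, num_frames=8):
        super().__init__()
        self.num_frames = num_frames
        # A set of learned embeddings, one per candidate action tube.
        self.tube_queries = nn.Embedding(num_queries, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        # Classification head: action classes plus a "no action" background class.
        self.cls_head = nn.Linear(d_model, num_classes + 1)
        # Box head: 4 normalized box coordinates per frame, linked into a tube
        # by sharing one query embedding across all frames.
        self.box_head = nn.Linear(d_model, num_frames * 4)

    def forward(self, memory):
        # memory: (B, T*H*W, d_model) flattened spatio-temporal encoder features.
        q = self.tube_queries.weight.unsqueeze(0).expand(memory.size(0), -1, -1)
        h = self.decoder(q, memory)              # (B, num_queries, d_model)
        logits = self.cls_head(h)                # (B, num_queries, num_classes + 1)
        tubes = self.box_head(h).sigmoid()       # (B, num_queries, num_frames * 4)
        return logits, tubes

dec = TubeQueryDecoder()
mem = torch.randn(2, 8 * 7 * 7, 256)             # e.g. 8 frames of 7x7 features
logits, tubes = dec(mem)
# logits: (2, 15, 25); tubes: (2, 15, 32), coordinates in [0, 1]
```

Because every per-frame box in a tube comes from the same query embedding, the temporal linking of boxes is implicit in the decoding step rather than produced by a separate hand-designed linking stage.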