Most recent approaches to online action detection apply Recurrent Neural Networks (RNNs) to capture long-range temporal structure. However, RNNs suffer from non-parallelism and vanishing gradients, and are therefore hard to optimize. In this paper, we propose a new Transformer-based encoder-decoder framework, named OadTR, to tackle these problems. The encoder, equipped with a task token, captures the relationships and global interactions among historical observations. The decoder extracts auxiliary information by aggregating anticipated future clip representations. OadTR can therefore recognize current actions by encoding historical information and predicting future context simultaneously. We extensively evaluate the proposed OadTR on three challenging datasets: HDD, TVSeries, and THUMOS14. The experimental results show that OadTR achieves higher training and inference speeds than current RNN-based approaches, and significantly outperforms state-of-the-art methods in terms of both mAP and mcAP. Code is available at https://github.com/wangxiang1230/OadTR.
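To make the described architecture concrete, below is a minimal sketch of an OadTR-style encoder-decoder in PyTorch. All hyper-parameters (feature dimension, number of future queries, layer counts) and the aggregation of the task token with decoded future context are illustrative assumptions, not the official implementation; positional encodings are omitted for brevity. See the linked repository for the authors' code.

```python
import torch
import torch.nn as nn

class OadTRSketch(nn.Module):
    """Hypothetical OadTR-style model: Transformer encoder over historical
    clips plus a learnable task token, and a decoder that aggregates
    anticipated future clip representations."""

    def __init__(self, feat_dim=2048, d_model=512, num_classes=22,
                 num_future=8, n_heads=8, n_layers=3):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)  # clip features -> model dim
        # Learnable task token summarizing the historical sequence.
        self.task_token = nn.Parameter(torch.zeros(1, 1, d_model))
        # Learnable queries standing in for anticipated future clips.
        self.future_queries = nn.Parameter(torch.zeros(1, num_future, d_model))
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)
        self.decoder = nn.TransformerDecoder(dec_layer, n_layers)
        self.classifier = nn.Linear(2 * d_model, num_classes)

    def forward(self, hist_feats):  # hist_feats: (B, T, feat_dim)
        x = self.proj(hist_feats)
        B = x.size(0)
        # Prepend the task token, then model global interactions over history.
        x = torch.cat([self.task_token.expand(B, -1, -1), x], dim=1)
        memory = self.encoder(x)
        task = memory[:, 0]  # task-token summary of past observations
        # Decode anticipated future context by attending to the encoded history.
        future = self.decoder(self.future_queries.expand(B, -1, -1), memory)
        future = future.mean(dim=1)  # aggregate future clip representations
        # Recognize the current action from past summary + predicted future.
        return self.classifier(torch.cat([task, future], dim=-1))

# Usage: classify the current action from 64 historical clip features.
model = OadTRSketch()
logits = model(torch.randn(2, 64, 2048))  # -> (2, num_classes)
```

Unlike an RNN, every historical clip here is processed in parallel by self-attention, which is the source of the training and inference speedups claimed above.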