Temporal action localization aims to localize the start and end times of actions together with their categories. Limited by GPU memory, mainstream methods pre-extract features for each video; feature quality therefore determines the upper bound of detection performance. In this technical report, we explore classic convolution-based backbones and the recent surge of transformer-based backbones. We find that transformer-based methods achieve better classification performance than convolution-based ones, but they cannot generate accurate action proposals. In addition, extracting features at a larger frame resolution to reduce the loss of spatial information also effectively improves temporal action localization performance. With a single SlowFast feature and a simple combination, BMN+TCANet, we achieve 42.42% mAP on the validation set, which is 1.87% higher than the 2020 multi-model ensemble result. Finally, we achieve Rank 1st on the CVPR2021 HACS supervised Temporal Action Localization Challenge.