Temporal action detection aims to locate the temporal boundaries of actions in a video. Current boundary-matching-based methods enumerate and calculate all possible boundary matchings to generate proposals. However, these methods neglect long-range context aggregation in boundary prediction. Moreover, because adjacent matchings have similar semantics, local semantic aggregation of densely generated matchings cannot improve semantic richness or discrimination. In this paper, we propose an end-to-end proposal generation method named Dual Context Aggregation Network (DCAN), which aggregates context at two levels, namely the boundary level and the proposal level, to generate high-quality action proposals and thereby improve the performance of temporal action detection. Specifically, we design Multi-Path Temporal Context Aggregation (MTCA) to achieve smooth context aggregation at the boundary level and precise evaluation of boundaries. For matching evaluation, Coarse-to-fine Matching (CFM) is designed to aggregate context at the proposal level and refine the matching map from coarse to fine. We conduct extensive experiments on ActivityNet v1.3 and THUMOS-14. DCAN obtains an average mAP of 35.39% on ActivityNet v1.3 and an mAP of 54.14% at IoU 0.5 on THUMOS-14, demonstrating that DCAN can generate high-quality proposals and achieve state-of-the-art performance. The code is available at https://github.com/cg1177/DCAN.
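To make the two-level design concrete, the following is a minimal PyTorch sketch of the idea described above: boundary-level context aggregation over a snippet feature sequence, and proposal-level evaluation of a dense (start, end) matching map refined from coarse to fine. The abstract does not specify the internals of MTCA or CFM, so the choices below (parallel dilated temporal convolutions as the "multi-path" aggregation, a 1x1 coarse scorer followed by a small refinement head, and all module and parameter names) are assumptions for illustration only, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class MultiPathTemporalAggregation(nn.Module):
    """Boundary-level context aggregation (sketch of the MTCA idea).

    Assumption: "multi-path" is modeled as parallel 1-D convolutions
    with different dilation rates, giving each path a different
    temporal receptive field; the real MTCA design may differ.
    """

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.paths = nn.ModuleList(
            nn.Conv1d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        # Two output channels: per-snippet start and end probabilities.
        self.boundary_head = nn.Conv1d(channels, 2, kernel_size=1)

    def forward(self, x):  # x: (B, C, T) snippet features
        ctx = sum(path(x) for path in self.paths) / len(self.paths)
        return torch.sigmoid(self.boundary_head(ctx))  # (B, 2, T)


class CoarseToFineMatching(nn.Module):
    """Proposal-level matching evaluation (sketch of the CFM idea).

    Builds a T x T map of all (start, end) pairs, scores it coarsely,
    then refines the map conditioned on the coarse scores. Hypothetical
    structure, not the paper's exact design.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.coarse = nn.Conv2d(2 * channels, 1, kernel_size=1)
        self.fine = nn.Sequential(
            nn.Conv2d(2 * channels + 1, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, 1, kernel_size=1),
        )

    def forward(self, feat):  # feat: (B, C, T)
        B, C, T = feat.shape
        start = feat.unsqueeze(3).expand(B, C, T, T)  # dim 2 indexes starts
        end = feat.unsqueeze(2).expand(B, C, T, T)    # dim 3 indexes ends
        pair = torch.cat([start, end], dim=1)         # (B, 2C, T, T)
        coarse = torch.sigmoid(self.coarse(pair))     # coarse matching map
        fine = torch.sigmoid(self.fine(torch.cat([pair, coarse], dim=1)))
        return coarse, fine                           # both (B, 1, T, T)


# Usage on dummy features: 2 videos, 256-d features, 100 snippets.
x = torch.randn(2, 256, 100)
boundary_probs = MultiPathTemporalAggregation(256)(x)   # (2, 2, 100)
coarse_map, fine_map = CoarseToFineMatching(256)(x)     # (2, 1, 100, 100)
```

Proposals would then be read off by pairing high-probability start/end snippets and ranking them with the fine matching-map scores; the ranking and post-processing details are left out of this sketch.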