Temporal action proposal generation (TAPG) is a challenging task that requires localizing action intervals in an untrimmed video. Intuitively, we, as humans, perceive an action through the interactions between actors, relevant objects, and the surrounding environment. Despite the significant progress of TAPG, a vast majority of existing methods ignore the aforementioned principle of the human perceiving process by applying a backbone network to a given video as a black box. In this paper, we propose to model these interactions with a multi-modal representation network, namely, Actors-Objects-Environment Interaction Network (AOE-Net). Our AOE-Net consists of two modules, i.e., a perception-based multi-modal representation (PMR) module and a boundary-matching module (BMM). Additionally, we introduce an adaptive attention mechanism (AAM) in PMR to focus only on main actors (or relevant objects) and model the relationships among them. The PMR module represents each video snippet by a visual-linguistic feature, in which main actors and the surrounding environment are represented by visual information, whereas relevant objects are depicted by linguistic features through an image-text model. The BMM module takes the sequence of visual-linguistic features as input and generates action proposals. Comprehensive experiments and extensive ablation studies on the ActivityNet-1.3 and THUMOS-14 datasets show that our proposed AOE-Net outperforms previous state-of-the-art methods with remarkable performance and generalization on both TAPG and temporal action detection. To prove the robustness and effectiveness of AOE-Net, we further conduct an ablation study on egocentric videos, i.e., the EPIC-KITCHENS 100 dataset. Source code is available upon acceptance.