Videos are better-organized, curated data sources for visual concept learning than images. Unlike two-dimensional images, which carry only spatial information, the additional temporal dimension of video bridges and synchronizes multiple modalities. However, most video detection benchmarks do not fully exploit these additional modalities. For example, EPIC Kitchens, the largest dataset in first-person (egocentric) vision, still relies on crowdsourced information to refine action boundaries and provide instance-level action annotations. We explore how to eliminate the expensive boundary-refinement annotations in video detection data. We propose a model that learns from narration supervision and exploits multimodal features, including RGB, motion flow, and ambient sound. Our model learns to attend to the frames related to the narration label while suppressing irrelevant frames. Our experiments show that noisy audio narration suffices to learn a good action detection model, thereby reducing annotation expenses.
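To make the idea of attending to narration-relevant frames concrete, the following is a minimal sketch, not the authors' exact architecture: per-frame RGB, flow, and audio features are concatenated, an attention head weights frames by relevance to the narrated action, and the pooled representation is classified against the (noisy) narration-derived label. All dimensions, module names, and the single-label cross-entropy objective are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NarrationAttentionDetector(nn.Module):
    def __init__(self, rgb_dim=1024, flow_dim=1024, audio_dim=128,
                 hidden_dim=512, num_classes=125):
        super().__init__()
        in_dim = rgb_dim + flow_dim + audio_dim
        # Projects concatenated per-frame multimodal features.
        self.proj = nn.Linear(in_dim, hidden_dim)
        # Scores each frame's relevance to the narrated action.
        self.attn = nn.Linear(hidden_dim, 1)
        # Classifies the attention-pooled clip representation.
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, rgb, flow, audio):
        # rgb, flow, audio: (batch, time, feature_dim) per-frame features.
        x = torch.cat([rgb, flow, audio], dim=-1)
        h = torch.relu(self.proj(x))                 # (B, T, H)
        scores = self.attn(h).squeeze(-1)            # (B, T)
        weights = F.softmax(scores, dim=1)           # attend to narration-relevant frames
        pooled = (weights.unsqueeze(-1) * h).sum(1)  # suppress irrelevant frames via weighting
        return self.classifier(pooled), weights

# Hypothetical training step against the narration-derived label:
# logits, attn = model(rgb_feats, flow_feats, audio_feats)
# loss = F.cross_entropy(logits, narration_labels)
```

At test time, the learned attention weights themselves can serve as a frame-level relevance signal for localizing action boundaries, which is what removes the need for manually refined boundary annotations.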