We describe a DNN for fine-grained action classification and video captioning. It gives state-of-the-art performance on the challenging Something-Something dataset, which contains over 220,000 videos spanning 174 fine-grained actions. Classification and captioning on this dataset are challenging because of the subtle differences between actions, the use of thousands of different objects, and the diversity of captions penned by crowd actors. The model architecture shares features between the classification and captioning tasks and is trained end-to-end. It substantially outperforms the existing classification benchmark for Something-Something, with strong fine-grained results, and it yields a solid baseline for the new Something-Something captioning task. Our results reveal a strong correlation between the degree of detail in the task and the ability of the learned features to transfer to other tasks.
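To make the shared-feature design concrete, the following is a minimal PyTorch sketch of the general pattern the abstract describes: one video encoder feeding both a 174-way action classifier and a caption decoder, trained jointly end-to-end. All layer types, sizes, and names here are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class SharedFeatureModel(nn.Module):
    """Hypothetical sketch: a shared video encoder with two heads,
    an action classifier and an LSTM caption decoder.
    Layer sizes are illustrative, not taken from the paper."""

    def __init__(self, num_classes=174, vocab_size=10000,
                 feat_dim=512, embed_dim=256):
        super().__init__()
        # Shared spatiotemporal encoder (placeholder: a tiny 3D CNN).
        self.encoder = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Head 1: fine-grained action classification.
        self.classifier = nn.Linear(feat_dim, num_classes)
        # Head 2: caption decoder conditioned on the shared features.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.LSTM(embed_dim + feat_dim, feat_dim,
                               batch_first=True)
        self.word_logits = nn.Linear(feat_dim, vocab_size)

    def forward(self, video, caption_tokens):
        # video: (B, 3, T, H, W); caption_tokens: (B, L) word indices.
        feats = self.encoder(video)            # (B, feat_dim), shared
        class_logits = self.classifier(feats)  # (B, num_classes)
        words = self.embed(caption_tokens)     # (B, L, embed_dim)
        # Condition every decoding step on the shared video features.
        ctx = feats.unsqueeze(1).expand(-1, words.size(1), -1)
        out, _ = self.decoder(torch.cat([words, ctx], dim=-1))
        caption_logits = self.word_logits(out) # (B, L, vocab_size)
        return class_logits, caption_logits
```

End-to-end training in this setup would sum a cross-entropy loss over the action labels and a per-token cross-entropy loss over the captions, so gradients from both tasks flow back into the one encoder.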