The main progress for action segmentation comes from densely-annotated data for fully-supervised learning. Since manual annotation of frame-level actions is time-consuming and challenging, we propose to exploit auxiliary unlabeled videos, which are much easier to obtain, by framing this problem as a domain adaptation (DA) problem. Although various DA techniques have been proposed in recent years, most of them have been developed only for the spatial direction. Therefore, we propose Mixed Temporal Domain Adaptation (MTDA) to jointly align frame- and video-level embedded feature spaces across domains, and further integrate a domain attention mechanism to focus on aligning the frame-level features with higher domain discrepancy, leading to more effective domain adaptation. Finally, we evaluate our proposed method on three challenging datasets (GTEA, 50Salads, and Breakfast), and validate that MTDA outperforms the current state-of-the-art methods on all three datasets by large margins (e.g., a 6.4% gain in F1@50 and a 6.8% gain in edit score on GTEA).
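The alignment idea can be made concrete with a short sketch. Below is a minimal, hypothetical PyTorch rendering, not the authors' released code: frame-level and video-level features each pass through a gradient-reversal layer into a binary domain classifier, and a domain attention step reweights frames according to how confidently the classifier separates the domains (i.e., higher domain discrepancy). All class and variable names are illustrative, and the entropy-based residual attention is an assumption borrowed from the TA3N line of work by the same authors.

```python
# Hypothetical sketch of MTDA-style frame- and video-level adversarial alignment.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reversed (scaled) gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

class DomainClassifier(nn.Module):
    """Binary source-vs-target classifier over an embedded feature."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 2))

    def forward(self, x):
        return self.net(x)

class MTDAHead(nn.Module):
    """Jointly aligns frame- and video-level feature spaces with domain attention."""
    def __init__(self, dim):
        super().__init__()
        self.frame_d = DomainClassifier(dim)  # local (frame-level) discriminator
        self.video_d = DomainClassifier(dim)  # global (video-level) discriminator

    def forward(self, frame_feat, lambd=1.0):
        # frame_feat: (batch, time, dim) embedded frame features.
        d_frame = self.frame_d(grad_reverse(frame_feat, lambd))

        # Domain attention (assumed entropy-based, TA3N-style): frames whose
        # domain the discriminator predicts confidently (low entropy) have
        # higher domain discrepancy, so weight them up with a residual term
        # before temporal pooling.
        p = torch.softmax(d_frame, dim=-1)
        entropy = -(p * torch.log(p + 1e-6)).sum(dim=-1, keepdim=True)  # (B, T, 1)
        w = 1.0 - entropy                      # larger when domains are separable
        attended = frame_feat * (1.0 + w)      # residual attention

        video_feat = attended.mean(dim=1)      # temporal pooling to video level
        d_video = self.video_d(grad_reverse(video_feat, lambd))
        return d_frame, d_video, video_feat
```

In training, both discriminators would receive a standard binary cross-entropy domain loss on source and target batches; the gradient reversal turns minimizing that loss into adversarial alignment of the feature extractor, and the supervised segmentation loss is computed on labeled source videos only.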


