The main progress in action segmentation comes from densely-annotated data for fully-supervised learning. Since manual annotation of frame-level actions is time-consuming and challenging, we propose to exploit auxiliary unlabeled videos, which are much easier to obtain, by formulating this problem as domain adaptation (DA). Although various DA techniques have been proposed in recent years, most of them address only spatial feature alignment and neglect the temporal dynamics of videos. Therefore, we propose Mixed Temporal Domain Adaptation (MTDA) to jointly align frame-level and video-level embedded feature spaces across domains, and further integrate a domain attention mechanism that focuses alignment on the frame-level features with higher domain discrepancy, leading to more effective domain adaptation. Finally, we evaluate the proposed method on three challenging datasets (GTEA, 50Salads, and Breakfast) and show that MTDA outperforms the current state-of-the-art methods on all three datasets by large margins (e.g., a 6.4% gain in F1@50 and a 6.8% gain in edit score on GTEA).
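To make the two-level adversarial alignment concrete, the sketch below shows one plausible PyTorch realization: frame features pass through a gradient reversal layer into a frame-level domain discriminator; an entropy-based attention weight (one common proxy for domain discrepancy) pools frames into a video-level feature, which a second discriminator then aligns. All names (MixedTemporalDA, DomainClassifier), dimensions, and the exact attention formulation are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of frame- and video-level adversarial domain alignment.
# Assumed details are marked; this is not the official MTDA code.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity forward, negated gradient backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class DomainClassifier(nn.Module):
    """Binary classifier predicting source (0) vs. target (1) domain."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 2))

    def forward(self, x):
        return self.net(x)


class MixedTemporalDA(nn.Module):
    """Aligns frame-level and video-level features adversarially.

    Input frame features have shape (batch, time, dim). Video features are an
    attention-weighted temporal pool, where frames whose domain prediction is
    most uncertain (high entropy, i.e. high domain discrepancy) get larger
    weights -- one plausible reading of the domain attention mechanism.
    """
    def __init__(self, dim, lambd=1.0):
        super().__init__()
        self.frame_d = DomainClassifier(dim)  # frame-level discriminator
        self.video_d = DomainClassifier(dim)  # video-level discriminator
        self.lambd = lambd

    def forward(self, frame_feats):
        # Frame-level domain logits (through gradient reversal).
        frame_logits = self.frame_d(grad_reverse(frame_feats, self.lambd))
        # Domain attention: entropy of the frame-level domain prediction.
        p = frame_logits.softmax(dim=-1)
        entropy = -(p * p.clamp_min(1e-8).log()).sum(dim=-1)  # (batch, time)
        attn = 1 + entropy                                    # residual weighting
        video_feats = (attn.unsqueeze(-1) * frame_feats).mean(dim=1)
        # Video-level domain logits.
        video_logits = self.video_d(grad_reverse(video_feats, self.lambd))
        return frame_logits, video_logits
```

During training, both discriminators would be trained with standard cross-entropy on source/target domain labels; the gradient reversal makes the upstream feature extractor maximize those losses, pushing frame- and video-level features toward domain invariance.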