We propose a novel domain-adaptive action detection approach and a new adaptation protocol that leverage recent advancements in image-level unsupervised domain adaptation (UDA) techniques and handle the vagaries of instance-level video data. Self-training combined with cross-domain mixed sampling has shown remarkable performance gains for semantic segmentation in the UDA context. Motivated by this, we propose an approach for human action detection in videos that transfers knowledge from the source domain (annotated dataset) to the target domain (unannotated dataset) using mixed sampling and pseudo-label-based self-training. Existing UDA techniques follow the ClassMix algorithm for semantic segmentation. However, simply adopting ClassMix for action detection does not work, mainly because these are two entirely different problems, i.e., pixel-level classification vs. instance-level detection. To tackle this, we propose a novel action instance mixed sampling technique that combines information across domains based on action instances rather than action classes. Moreover, we propose a new UDA training protocol that addresses the long-tail sample distribution and the domain shift problem by using supervision from an auxiliary source domain (ASD). For the ASD, we propose a new action detection dataset with dense frame-level annotations. We name our framework domain-adaptive action instance mixing (DA-AIM). We demonstrate that DA-AIM consistently outperforms prior works on challenging domain adaptation benchmarks. The source code is available at https://github.com/wwwfan628/DA-AIM.
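To make the instance-level vs. class-level distinction concrete, the following is a minimal, hypothetical sketch of cross-domain mixing at the action-instance level. It is illustrative only, not the DA-AIM implementation (see the linked repository for that); the function name `action_instance_mix` and its interface, including per-frame boxes in `src_boxes`, are assumptions made for this example.

```python
import numpy as np

def action_instance_mix(src_clip, tgt_clip, src_boxes, src_label):
    """Cross-domain mixed sampling at the action-instance level.

    Unlike ClassMix, which pastes all pixels belonging to selected
    semantic classes, this pastes the spatio-temporal region of one
    action instance from a source clip into a target clip.

    Assumed (hypothetical) interface:
      src_clip, tgt_clip: float arrays of shape (T, H, W, C)
      src_boxes: int array of shape (T, 4), per-frame (x1, y1, x2, y2)
      src_label: action class id of the pasted instance
    """
    mixed = tgt_clip.copy()
    for t, (x1, y1, x2, y2) in enumerate(src_boxes):
        # Paste the instance's box region frame by frame, so the actor's
        # appearance and motion come from the source domain while the
        # surrounding context remains from the target domain.
        mixed[t, y1:y2, x1:x2] = src_clip[t, y1:y2, x1:x2]
    # The mixed clip inherits the source instance's ground-truth box and
    # label; supervision for the remaining target regions would come from
    # pseudo-labels during self-training.
    return mixed, src_boxes, src_label
```

In this sketch, mixing is driven by detected action instances rather than class masks, which is the key difference from ClassMix noted above.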