Video-based Unsupervised Domain Adaptation (VUDA) methods improve the robustness of video models, enabling them to be applied to action recognition tasks across different environments. However, these methods require constant access to source data during the adaptation process. Yet in many real-world applications, the subjects and scenes in the source video domain should remain irrelevant to those in the target video domain. With the increasing emphasis on data privacy, methods that require source data access would raise serious privacy issues. To cope with this concern, a more practical domain adaptation scenario is formulated as Source-Free Video-based Domain Adaptation (SFVDA). Though there are a few methods for Source-Free Domain Adaptation (SFDA) on image data, these methods yield degraded performance in SFVDA due to the multi-modal nature of videos and the presence of additional temporal features. In this paper, we propose a novel Attentive Temporal Consistent Network (ATCoN) to address SFVDA by learning temporal consistency, guaranteed by two novel consistency objectives, namely feature consistency and source prediction consistency, performed across local temporal features. ATCoN further constructs effective overall temporal features by attending to local temporal features based on prediction confidence. Empirical results demonstrate the state-of-the-art performance of ATCoN across various cross-domain action recognition benchmarks.
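To make the two consistency objectives and the confidence-based attention concrete, the sketch below illustrates one plausible reading of the abstract in PyTorch. It is a minimal illustration, not the authors' implementation: the tensor shapes, the helper name atcon_style_losses, the use of max softmax probability as confidence, and the specific cosine/KL formulations are all assumptions made for exposition.

```python
# Minimal PyTorch sketch of the ideas described in the abstract: consistency
# across local temporal features and confidence-weighted attention.
# All names (local_feats, source_classifier, atcon_style_losses) are
# illustrative assumptions, not the authors' code.
import torch
import torch.nn.functional as F


def atcon_style_losses(local_feats: torch.Tensor, source_classifier: torch.nn.Module):
    """local_feats: (B, K, D) features of K local temporal clips per video.

    Returns a confidence-attended overall feature and two consistency terms:
    - feature consistency: each local feature should agree with the overall one;
    - source prediction consistency: the (frozen) source classifier should give
      similar predictions on every local temporal feature.
    """
    B, K, D = local_feats.shape

    # Predictions of the frozen source classifier on each local temporal feature.
    logits = source_classifier(local_feats.reshape(B * K, D)).reshape(B, K, -1)
    probs = logits.softmax(dim=-1)                      # (B, K, C)

    # Attention weights from prediction confidence (max class probability here,
    # chosen as one simple proxy for confidence).
    confidence = probs.max(dim=-1).values               # (B, K)
    attn = confidence.softmax(dim=-1).unsqueeze(-1)     # (B, K, 1)

    # Overall temporal feature: confidence-weighted sum of local features.
    overall_feat = (attn * local_feats).sum(dim=1)      # (B, D)

    # Feature consistency: cosine distance between each local feature
    # and the overall temporal feature.
    feat_cons = 1.0 - F.cosine_similarity(
        local_feats, overall_feat.unsqueeze(1).expand(-1, K, -1), dim=-1
    ).mean()

    # Source prediction consistency: each local prediction should match
    # the mean prediction across local temporal features.
    mean_prob = probs.mean(dim=1, keepdim=True)          # (B, 1, C)
    pred_cons = F.kl_div(
        probs.clamp_min(1e-8).log(), mean_prob.expand(-1, K, -1),
        reduction="batchmean",
    )

    return overall_feat, feat_cons, pred_cons
```

In this reading, the two consistency terms would be added to the adaptation objective while the source classifier stays frozen, so that adaptation relies only on the source model and unlabeled target videos, never on the source data itself.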