We address the problem of One-Shot Unsupervised Domain Adaptation. Unlike traditional Unsupervised Domain Adaptation, it assumes that only one unlabeled target sample is available when learning to adapt. This setting is realistic but more challenging: conventional adaptation approaches are prone to failure due to the scarcity of unlabeled target data. To this end, we propose a novel Adversarial Style Mining (ASM) approach, which combines a style transfer module and a task-specific module in an adversarial manner. Specifically, the style transfer module iteratively searches for harder stylized images around the one-shot target sample according to the current learning state, leading the task model to explore potential styles that are difficult to handle in the almost unseen target domain, thus boosting adaptation performance in a data-scarce scenario. The adversarial learning framework makes the style transfer module and the task-specific module benefit from each other during the competition. Extensive experiments on both cross-domain classification and segmentation benchmarks verify that ASM achieves state-of-the-art adaptation performance under the challenging one-shot setting.
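To make the adversarial interplay between the two modules concrete, the following is a minimal, illustrative sketch of the style-mining loop described above, not the authors' implementation. All module names, the AdaIN-like stylizer, the toy classifier, the anchor penalty, and hyper-parameters (e.g. `eps`) are assumptions introduced for illustration only.

```python
# Hedged sketch of adversarial style mining: the stylizer searches for harder
# styles around the one-shot target's style anchor, while the task model
# learns to solve them. Not the paper's actual architecture or losses.
import torch
import torch.nn as nn

class StyleTransfer(nn.Module):
    """Toy stylizer: re-colors source images with a searchable style code
    (channel-wise mean/std), AdaIN-like. A stand-in for the ASM stylizer."""
    def __init__(self, channels=3):
        super().__init__()
        self.mean = nn.Parameter(torch.zeros(1, channels, 1, 1))
        self.std = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, x):
        mu = x.mean(dim=(2, 3), keepdim=True)
        sigma = x.std(dim=(2, 3), keepdim=True) + 1e-5
        return (x - mu) / sigma * self.std + self.mean

class TaskModel(nn.Module):
    """Toy classifier standing in for the task-specific network."""
    def __init__(self, channels=3, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, num_classes))

    def forward(self, x):
        return self.net(x)

def asm_step(stylizer, task_model, opt_style, opt_task,
             x_src, y_src, anchor_mean, anchor_std, eps=0.5):
    criterion = nn.CrossEntropyLoss()

    # (1) Task model minimizes the loss on the currently mined stylized images.
    stylized = stylizer(x_src).detach()
    opt_task.zero_grad()
    loss_task = criterion(task_model(stylized), y_src)
    loss_task.backward()
    opt_task.step()

    # (2) Stylizer maximizes the task loss (mines a harder style), with a
    #     penalty keeping the mined style near the one-shot target's anchor.
    opt_style.zero_grad()
    loss_adv = -criterion(task_model(stylizer(x_src)), y_src)
    loss_anchor = ((stylizer.mean - anchor_mean) ** 2).mean() + \
                  ((stylizer.std - anchor_std) ** 2).mean()
    (loss_adv + eps * loss_anchor).backward()
    opt_style.step()
    return loss_task.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    stylizer, task_model = StyleTransfer(), TaskModel()
    opt_style = torch.optim.SGD(stylizer.parameters(), lr=0.1)
    opt_task = torch.optim.SGD(task_model.parameters(), lr=0.01)

    # The one-shot target sample defines the style anchor; a labeled source
    # batch drives the task loss. Random tensors are used here as placeholders.
    target_one_shot = torch.rand(1, 3, 32, 32)
    anchor_mean = target_one_shot.mean(dim=(2, 3), keepdim=True)
    anchor_std = target_one_shot.std(dim=(2, 3), keepdim=True)
    x_src, y_src = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))

    for step in range(5):
        loss = asm_step(stylizer, task_model, opt_style, opt_task,
                        x_src, y_src, anchor_mean, anchor_std)
        print(f"step {step}: task loss {loss:.3f}")
```

The alternating updates capture the claimed benefit of the competition: each stylizer step produces styles the task model has not yet mastered, and each task step hardens the model against them.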