Many existing approaches for unsupervised domain adaptation (UDA) focus on adapting under only data distribution shift and offer limited success under additional cross-domain label distribution shift. Recent work based on self-training using target pseudo-labels has shown promise, but on challenging shifts pseudo-labels may be highly unreliable, and using them for self-training may cause error accumulation and domain misalignment. We propose Selective Entropy Optimization via Committee Consistency (SENTRY), a UDA algorithm that judges the reliability of a target instance based on its predictive consistency under a committee of random image transformations. Our algorithm then selectively minimizes predictive entropy to increase confidence on highly consistent target instances, while maximizing predictive entropy to reduce confidence on highly inconsistent ones. In combination with pseudo-label based approximate target class balancing, our approach leads to significant improvements over the state-of-the-art on 27/31 domain shifts from standard UDA benchmarks as well as benchmarks designed to stress-test adaptation under label distribution shift.
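The selective entropy objective described above can be sketched in a few lines. This is an illustrative approximation under stated assumptions, not the paper's implementation: the function names, the committee size, and the simple majority-agreement rule are hypothetical choices; SENTRY's actual committee, thresholds, and training loop may differ.

```python
import numpy as np

def entropy(p):
    # Shannon entropy of a probability vector (natural log),
    # clipped to avoid log(0).
    p = np.clip(p, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def selective_entropy_loss(anchor_probs, committee_probs, majority=2):
    """Sketch of a SENTRY-style selective entropy term for one target instance.

    anchor_probs: softmax output for the untransformed target image.
    committee_probs: softmax outputs under random image transformations.
    Returns a scalar loss contribution:
      +entropy when the committee agrees with the anchor prediction
        (minimizing it increases confidence on consistent instances),
      -entropy when it disagrees
        (minimizing it, i.e. maximizing entropy, reduces confidence
         on inconsistent instances).
    """
    anchor_label = int(np.argmax(anchor_probs))
    agree = sum(int(np.argmax(p) == anchor_label) for p in committee_probs)
    consistent = agree >= majority  # hypothetical majority-vote rule
    h = entropy(anchor_probs)
    return h if consistent else -h
```

For example, an instance whose committee predictions all share the anchor's argmax class contributes a positive entropy term (driven toward confidence), while one whose committee predictions scatter across other classes contributes a negative term (driven toward uncertainty).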