Domain generalization (DG) is a fundamental yet very challenging research topic in machine learning. Existing methods mainly focus on learning domain-invariant features from a limited number of source domains with a static model. Unfortunately, they lack a training-free mechanism to adjust the model when it is generalized to agnostic target domains. To tackle this problem, we develop a brand-new DG variant, namely Dynamic Domain Generalization (DDG), in which the model learns to tweak its network parameters to adapt to data from different domains. Specifically, we leverage a meta-adjuster to tweak the network parameters of a static model with respect to data from different domains. In this way, the static model is optimized to learn domain-shared features, while the meta-adjuster is designed to learn domain-specific features. To enable this process, DomainMix is exploited to simulate data from diverse domains when training the meta-adjuster to adapt to upcoming agnostic target domains. This learning mechanism urges the model to generalize to different agnostic target domains by adjusting itself without further training. Extensive experiments demonstrate the effectiveness of our proposed method. Code is available at: https://github.com/MetaVisionLab/DDG
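As a rough illustration of the mechanism described above, the following sketch conditions a static linear layer on each input via a tiny meta-adjuster, and uses a mixup-style blend of two source-domain samples as a stand-in for DomainMix. The shapes, the tanh-based adjuster, and the Beta-sampled mixing coefficient are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Static ("domain-shared") linear layer: y = x @ W_static.
W_static = rng.normal(size=(4, 3))

# Hypothetical meta-adjuster: a small learned map (A, b) that predicts a
# per-sample scaling of the static weights from the input itself,
# standing in for the domain-specific parameter adjustment.
A = rng.normal(scale=0.1, size=(4, 3))
b = np.zeros(3)

def meta_adjuster(x):
    scale = np.tanh(x @ A + b)       # per-output-unit scaling in (-1, 1)
    return W_static * (1.0 + scale)  # dynamic, input-conditioned weights

x = rng.normal(size=(4,))
W_dynamic = meta_adjuster(x)
y = x @ W_dynamic                    # prediction with adjusted weights

# Mixup-style stand-in for DomainMix: blend samples from two source
# domains to simulate data from a novel domain while training the adjuster.
x_a, x_b = rng.normal(size=(4,)), rng.normal(size=(4,))
lam = rng.beta(1.0, 1.0)
x_mix = lam * x_a + (1.0 - lam) * x_b
```

At test time no gradient step is taken: the adjuster simply recomputes `W_dynamic` for each incoming sample, which is the training-free adaptation the abstract refers to.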