Test-time adaptation harnesses test inputs to improve the accuracy of a model trained on source data when tested on shifted target data. Existing methods update the source model by (re-)training on each target domain. While effective, re-training is sensitive to the amount and order of the data and the hyperparameters for optimization. We instead update the target data, by projecting all test inputs toward the source domain with a generative diffusion model. Our diffusion-driven adaptation method, DDA, shares its models for classification and generation across all domains. Both models are trained on the source domain, then fixed during testing. We augment diffusion with image guidance and self-ensembling to automatically decide how much to adapt. Input adaptation by DDA is more robust than prior model adaptation approaches across a variety of corruptions, architectures, and data regimes on the ImageNet-C benchmark. With its input-wise updates, DDA succeeds where model adaptation degrades on too little data in small batches, dependent data in non-uniform order, or mixed data with multiple corruptions.
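To make the input-adaptation idea concrete, here is a minimal, hedged sketch of diffusion-driven input adaptation as described above: noise each test input part-way through the forward diffusion process, denoise it back with a source-trained diffusion model while guiding toward the original image, then fuse the classifier's predictions on the original and adapted inputs. This is not the authors' implementation; the interfaces `diffusion.q_sample`, `diffusion.p_sample`, the guidance weight `w`, the noising step `t_star`, and the averaging fusion rule are all assumptions chosen for illustration.

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def adapt_input(x, diffusion, t_star=500, w=0.1):
    """Project a corrupted test input toward the source domain.

    Assumes a source-trained diffusion model exposing:
      - q_sample(x, t): forward noising of x to step t
      - p_sample(x_t, t): one reverse (denoising) step
    Both the interface and the simple blending guidance below are
    illustrative assumptions, not the paper's exact procedure.
    """
    b = x.size(0)
    # Forward process: perturb the target input with noise up to step t_star.
    t = torch.full((b,), t_star, dtype=torch.long, device=x.device)
    x_t = diffusion.q_sample(x, t)
    # Reverse process: denoise back to step 0, nudging each intermediate
    # sample toward the original input (image guidance) so semantic content
    # is preserved while the corruption is projected away.
    for step in reversed(range(t_star)):
        t = torch.full((b,), step, dtype=torch.long, device=x.device)
        x_t = diffusion.p_sample(x_t, t)
        x_t = (1.0 - w) * x_t + w * x  # simple blend; a low-pass-filtered
        # guidance signal is another common choice.
    return x_t


@torch.no_grad()
def predict_with_self_ensemble(classifier, x, x_adapted):
    """Fuse predictions on the original and adapted inputs.

    Averaging class probabilities is one plausible self-ensembling rule
    (an assumption here); it hedges against either view being unreliable.
    """
    p_orig = F.softmax(classifier(x), dim=1)
    p_adapt = F.softmax(classifier(x_adapted), dim=1)
    return 0.5 * (p_orig + p_adapt)
```

Because both the diffusion model and the classifier stay frozen, each test input is adapted independently, which is why this style of update is unaffected by batch size, data order, or mixtures of corruptions.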