The generalizability of deep learning models may be severely affected by differences between the distributions of the training (source domain) and test (target domain) sets, e.g., when the sets are produced by different hardware. As a consequence of this domain shift, a model might perform well on data from one clinic yet fail when deployed in another. We propose a lightweight and transparent approach to test-time domain adaptation. The idea is to substitute the target image's low-frequency Fourier-space components, which are deemed to reflect the style of an image. To maximize performance, we implement an "optimal style donor" selection technique and use several source data points to alter the appearance of a single target scan (Multi-Source Transferring). We study how the severity of the domain shift affects the method's performance, and show that our training-free approach reaches the state-of-the-art level of complex deep domain adaptation models. The code for our experiments is released.
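The core operation described above can be sketched as a low-frequency amplitude swap in Fourier space. The following is a minimal illustration, not the authors' released implementation: the function name, the `beta` parameter (controlling the size of the swapped low-frequency square), and the assumption of grayscale 2-D images are all ours.

```python
import numpy as np

def swap_low_freq(target, source, beta=0.1):
    """Illustrative sketch: replace the low-frequency amplitude
    components of `target` with those of `source`, keeping the
    target's phase. `beta` sets the half-size of the swapped square
    as a fraction of the image size (a hypothetical parameter)."""
    # 2-D FFT of both images, shifted so low frequencies are centered
    ft = np.fft.fftshift(np.fft.fft2(target))
    fs = np.fft.fftshift(np.fft.fft2(source))

    amp_t, pha_t = np.abs(ft), np.angle(ft)
    amp_s = np.abs(fs)

    h, w = target.shape
    bh, bw = max(1, int(h * beta)), max(1, int(w * beta))
    cy, cx = h // 2, w // 2

    # swap the central (low-frequency) block of the amplitude spectrum,
    # which is assumed to carry the image "style"
    amp_t[cy - bh:cy + bh, cx - bw:cx + bw] = \
        amp_s[cy - bh:cy + bh, cx - bw:cx + bw]

    # recombine the mixed amplitude with the target's own phase
    mixed = amp_t * np.exp(1j * pha_t)
    return np.fft.ifft2(np.fft.ifftshift(mixed)).real
```

In a multi-source setting, one would apply this swap once per selected source "style donor" and run inference on each restyled copy of the target scan.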