Harmonization improves data consistency and is central to the effective integration of diverse imaging data acquired across multiple sites. Recent deep learning techniques for harmonization are predominantly supervised in nature and hence require imaging data of the same human subjects to be acquired at multiple sites. Such data collection requires human subjects to travel across sites and is therefore challenging, costly, and impractical, even more so when a sufficient sample size is needed for reliable network training. Here we show how harmonization can be achieved with a deep neural network that does not rely on traveling human phantom data. Our method disentangles site-specific appearance information and site-invariant anatomical information from images acquired at multiple sites and then uses the disentangled information to generate, for each subject, an image with the appearance of any target site. We demonstrate with more than 6,000 multi-site T1- and T2-weighted images that our method is remarkably effective in generating images with realistic site-specific appearances without altering anatomical details. Our method allows retrospective harmonization of data from a wide range of existing large-scale imaging studies, conducted with different scanners and protocols, without additional data collection.
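To make the disentanglement idea concrete, the following is a minimal sketch, not the authors' implementation, of how appearance and anatomy information can be separated and recombined for harmonization. It assumes a PyTorch-style setup; all module names, layer sizes, and the appearance-code dimension are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch (assumed PyTorch; not the authors' actual architecture):
# an anatomy encoder extracts site-invariant structure, an appearance
# encoder extracts a site-specific appearance code, and a decoder
# recombines them so a subject's anatomy can be rendered with the
# appearance of any target site.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )


class AnatomyEncoder(nn.Module):
    """Maps an image slice to site-invariant anatomical feature maps."""
    def __init__(self, in_ch=1, feat=32):
        super().__init__()
        self.net = nn.Sequential(conv_block(in_ch, feat), conv_block(feat, feat))

    def forward(self, x):
        return self.net(x)


class AppearanceEncoder(nn.Module):
    """Maps an image slice to a low-dimensional site-specific appearance code."""
    def __init__(self, in_ch=1, code_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(in_ch, 32),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, code_dim),
        )

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    """Reconstructs an image from anatomy features plus an appearance code."""
    def __init__(self, feat=32, code_dim=8, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(feat + code_dim, feat),
            nn.Conv2d(feat, out_ch, kernel_size=1),
        )

    def forward(self, anatomy, appearance):
        # Broadcast the appearance code over the spatial grid and concatenate.
        b, _, h, w = anatomy.shape
        app = appearance.view(b, -1, 1, 1).expand(-1, -1, h, w)
        return self.net(torch.cat([anatomy, app], dim=1))


# Harmonization: keep the subject's anatomy, swap in the target site's appearance.
enc_anat, enc_app, dec = AnatomyEncoder(), AppearanceEncoder(), Decoder()
source_img = torch.randn(1, 1, 128, 128)   # image from the source site
target_img = torch.randn(1, 1, 128, 128)   # any image from the target site
harmonized = dec(enc_anat(source_img), enc_app(target_img))
```

In such a design, the training losses (e.g., reconstruction and adversarial or consistency terms) would be responsible for ensuring that the anatomy branch carries no site information and the appearance branch carries no anatomy; the sketch above shows only the forward pass of the disentangle-and-recombine idea.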