Combining multi-site data can strengthen analyses and uncover trends, but the task is complicated by site-specific covariates that bias the data and, consequently, any downstream analyses. Post-hoc multi-site correction methods exist, but they rest on strong assumptions that often do not hold in real-world scenarios. Algorithms should instead be designed to account for site-specific effects, such as those arising from sequence parameter choices, and, where generalisation fails, should be able to identify that failure through explicit uncertainty modelling. This work presents such an algorithm: one that becomes robust to the physics of acquisition in the context of segmentation tasks while simultaneously modelling uncertainty. We demonstrate that our method not only generalises to complete holdout datasets, preserving segmentation quality, but does so while accounting for site-specific sequence choices, which also allows it to act as a harmonisation tool.
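To make the idea concrete, the following is a minimal sketch, not the paper's implementation, of what a physics-conditioned segmentation network with explicit uncertainty modelling might look like. All names (PhysicsConditionedSegNet, seq_params, the TR/TE/TI parameterisation) and the heteroscedastic loss in the style of Kendall and Gal (2017) are illustrative assumptions, not details taken from this work.

```python
# Hypothetical sketch: segmentation conditioned on acquisition (physics)
# parameters, with a per-voxel heteroscedastic-uncertainty head.
import torch
import torch.nn as nn

class PhysicsConditionedSegNet(nn.Module):
    """Toy 2D model: image + sequence parameters -> class logits + log-variance."""
    def __init__(self, in_ch=1, n_classes=4, n_seq_params=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # Embed sequence parameters (e.g. TR, TE, TI) and broadcast spatially,
        # so the network can learn contrast-dependent behaviour per site.
        self.seq_embed = nn.Sequential(nn.Linear(n_seq_params, 32), nn.ReLU())
        self.logits_head = nn.Conv2d(64, n_classes, 1)   # mean prediction
        self.logvar_head = nn.Conv2d(64, n_classes, 1)   # per-voxel uncertainty

    def forward(self, image, seq_params):
        feat = self.encoder(image)                        # (B, 32, H, W)
        emb = self.seq_embed(seq_params)                  # (B, 32)
        emb = emb[:, :, None, None].expand(-1, -1, *feat.shape[2:])
        h = torch.cat([feat, emb], dim=1)                 # condition on physics
        return self.logits_head(h), self.logvar_head(h)

def heteroscedastic_ce(logits, logvar, target, n_samples=10):
    """Stochastic cross-entropy (Kendall & Gal, 2017 style): sample logits
    from N(mu, sigma^2) and average the loss over the samples."""
    std = torch.exp(0.5 * logvar)
    losses = []
    for _ in range(n_samples):
        noisy = logits + std * torch.randn_like(std)
        losses.append(nn.functional.cross_entropy(noisy, target))
    return torch.stack(losses).mean()

if __name__ == "__main__":
    model = PhysicsConditionedSegNet()
    img = torch.randn(2, 1, 64, 64)
    seq = torch.tensor([[2000.0, 30.0, 900.0],    # e.g. TR/TE/TI, site A
                        [2400.0, 15.0, 1000.0]])  # e.g. TR/TE/TI, site B
    logits, logvar = model(img, seq)
    target = torch.randint(0, 4, (2, 64, 64))
    loss = heteroscedastic_ce(logits, logvar, target)
    loss.backward()
```

Under these assumptions, feeding the acquisition parameters alongside the image is what lets a single model absorb site-specific contrast differences, while the predicted log-variance gives the explicit uncertainty signal used to flag cases where generalisation fails.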