Human organs constantly undergo anatomical changes due to a complex mix of short-term (e.g., heartbeat) and long-term (e.g., aging) factors. Prior knowledge of these factors is therefore beneficial when modeling an organ's future state, i.e., via image generation. However, most medical image generation methods rely only on a single input image, ignoring sequential dependencies even when longitudinal data are available. Sequence-aware deep generative models, whose input is a sequence of ordered, timestamped images, remain underexplored in the medical imaging domain, which presents several unique challenges: 1) sequences of varying lengths; 2) missing data or frames; and 3) high dimensionality. To this end, we propose a sequence-aware diffusion model (SADM) for the generation of longitudinal medical images. Diffusion models have recently shown promising results in high-fidelity image generation. Our method extends this technique by introducing a sequence-aware transformer as the conditional module of a diffusion model. This novel design enables learning longitudinal dependencies even with missing data during training and allows autoregressive generation of a sequence of images during inference. Extensive experiments on 3D longitudinal medical images demonstrate the effectiveness of SADM compared with baselines and alternative methods. The code is available at https://github.com/ubc-tea/SADM-Longitudinal-Medical-Image-Generation.
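To make the conditioning idea concrete, below is a minimal PyTorch sketch of how a transformer can encode an ordered, possibly incomplete image sequence as the condition for a denoising network. This is an illustrative assumption, not the authors' implementation; the class name `SequenceConditioner`, all shapes, and all hyperparameters are hypothetical.

```python
# Minimal sketch (assumption, not the SADM implementation): a transformer
# encodes per-frame image embeddings, with an attention mask so that missing
# frames are ignored. The resulting sequence representation would then be fed
# to a diffusion model's denoiser as its condition.
import torch
import torch.nn as nn

class SequenceConditioner(nn.Module):
    """Encodes a timestamped image sequence; missing frames are masked out."""

    def __init__(self, embed_dim=256, num_heads=4, num_layers=2, max_len=16):
        super().__init__()
        layer = nn.TransformerEncoderLayer(embed_dim, num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        # Learned positional embedding standing in for the frame timestamp.
        self.pos = nn.Embedding(max_len, embed_dim)

    def forward(self, seq_embeds, missing_mask):
        # seq_embeds: (B, T, D) per-frame image embeddings.
        # missing_mask: (B, T) bool, True where a frame is absent, so
        # attention skips it (this is how variable-length / incomplete
        # sequences could be handled during training).
        t = torch.arange(seq_embeds.size(1), device=seq_embeds.device)
        h = seq_embeds + self.pos(t)
        return self.encoder(h, src_key_padding_mask=missing_mask)

# Hypothetical usage: condition on 3 observed frames, with one missing slot.
cond = SequenceConditioner()
frames = torch.randn(1, 4, 256)                       # (B, T, D) embeddings
missing = torch.tensor([[False, False, True, False]]) # frame 2 is missing
context = cond(frames, missing)                       # (1, 4, 256) condition
```

At inference time, the autoregressive generation described above could reuse this module in a loop: encode the frames observed so far, sample the next image from the conditional diffusion model, append its embedding to the sequence, and repeat.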