Human organs constantly undergo anatomical changes due to a complex mix of short-term (e.g., heartbeat) and long-term (e.g., aging) factors. Prior knowledge of these factors is therefore beneficial when modeling an organ's future state, e.g., via image generation. However, most medical image generation methods rely only on a single input image, ignoring sequential dependencies even when longitudinal data are available. Sequence-aware deep generative models, whose input is a sequence of ordered, timestamped images, remain underexplored in the medical imaging domain, which presents several unique challenges: 1) sequences of varying lengths; 2) missing data or frames; and 3) high dimensionality. To this end, we propose a sequence-aware diffusion model (SADM) for the generation of longitudinal medical images. Diffusion models have recently shown promising results on high-fidelity image generation. Our method extends this technique by introducing a sequence-aware transformer as the conditioning module of a diffusion model. This design enables learning longitudinal dependencies even with missing data during training, and allows autoregressive generation of a sequence of images during inference. Extensive experiments on 3D longitudinal medical images demonstrate the effectiveness of SADM compared with baselines and alternative methods.
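The inference scheme described above (a conditioning module that pools an ordered, possibly incomplete image sequence, feeding an autoregressive diffusion sampler) can be sketched in toy form. This is a minimal illustration, not the paper's implementation: the function names, the masked-mean "conditioner" standing in for the sequence-aware transformer, and the simplified update standing in for a real reverse-diffusion step are all assumptions made for clarity.

```python
import random

def encode_sequence(frames, mask):
    # Toy stand-in for the sequence-aware conditioning module:
    # pool only the observed (unmasked) frames, so missing frames
    # are tolerated rather than imputed.
    observed = [f for f, m in zip(frames, mask) if m]
    if not observed:
        return [0.0] * len(frames[0])
    dim = len(observed[0])
    return [sum(f[i] for f in observed) / len(observed) for i in range(dim)]

def denoise_step(x, cond, t, T):
    # Toy stand-in for one reverse-diffusion step: nudge the sample
    # toward the conditioning vector as t decreases (not a real DDPM update).
    w = 1.0 - t / T
    return [xi + 0.5 * w * (ci - xi) for xi, ci in zip(x, cond)]

def generate_next_frame(frames, mask, T=10, dim=4, seed=0):
    # Start from noise, then run the (toy) reverse process conditioned
    # on the encoded sequence.
    rng = random.Random(seed)
    cond = encode_sequence(frames, mask)
    x = [rng.gauss(0, 1) for _ in range(dim)]
    for t in range(T, 0, -1):
        x = denoise_step(x, cond, t, T)
    return x

def generate_autoregressive(frames, mask, n_future, dim=4):
    # Each generated frame is appended to the context, so later frames
    # are conditioned on earlier generated ones.
    frames, mask = list(frames), list(mask)
    out = []
    for k in range(n_future):
        nxt = generate_next_frame(frames, mask, dim=dim, seed=k)
        frames.append(nxt)
        mask.append(True)
        out.append(nxt)
    return out
```

The mask makes the missing-frame case explicit: unobserved timepoints are simply excluded from conditioning, mirroring how a sequence-aware model can train and sample despite gaps in the longitudinal record.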