Feature-Conditioned Cascaded Video Diffusion Models for Precise Echocardiogram Synthesis

Abstract: Image synthesis is expected to provide value for the translation of machine learning methods into clinical practice. Fundamental problems like model robustness, domain transfer, causal modelling, and operator training become approachable through synthetic data. Heavily operator-dependent modalities such as ultrasound imaging, in particular, require robust frameworks for image and video generation. So far, video generation has only been possible by providing input data that is as rich as the output data, e.g., an image sequence plus conditioning in, a video out. However, clinical documentation is usually scarce, and only single images are reported and stored, making retrospective patient-specific analysis and the generation of rich training data impossible with current approaches. In this paper, we extend elucidated diffusion models for video modelling to generate plausible video sequences from single images and arbitrary conditioning with clinical parameters. We explore this idea within the context of echocardiograms by varying the Left Ventricular Ejection Fraction (LVEF), the most essential clinical metric derived from these examinations. We use the publicly available EchoNet-Dynamic dataset for all our experiments. Our image-to-sequence approach achieves an R² score of 93%, 38 points higher than recently proposed sequence-to-sequence generation methods. A public demo is available here: bit.ly/3HTskPF. Code and models will be available at: https://github.com/HReynaud/EchoDiffusion.
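To make the conditioning idea concrete, the following is a minimal sketch of an EDM-style ("elucidated diffusion", Karras et al. 2022) denoiser for video, conditioned on a single anchor frame and an LVEF scalar. This is an illustration assuming PyTorch; the class name, the small 3D convolutional stand-in for the paper's cascaded video U-Nets, and all shapes are hypothetical assumptions, not the released EchoDiffusion code.

```python
# Hypothetical sketch, not the paper's implementation: an EDM-preconditioned
# video denoiser D(x; sigma) conditioned on one anchor frame and an LVEF scalar.
import torch
import torch.nn as nn


class ConditionalVideoDenoiser(nn.Module):
    """Toy denoiser with EDM preconditioning (c_skip, c_out, c_in, c_noise)."""

    def __init__(self, channels: int = 1, hidden: int = 32, sigma_data: float = 0.5):
        super().__init__()
        self.sigma_data = sigma_data
        # Inner network F: a tiny 3D conv stack standing in for a video U-Net.
        # Input channels: noisy clip + anchor frame broadcast over time.
        self.net = nn.Sequential(
            nn.Conv3d(2 * channels, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv3d(hidden, channels, 3, padding=1),
        )
        # Joint embedding of the noise level and the ejection-fraction scalar,
        # used as a channel-wise gain on the network input.
        self.cond = nn.Sequential(
            nn.Linear(2, hidden), nn.SiLU(), nn.Linear(hidden, 2 * channels)
        )

    def forward(self, x, sigma, frame, ef):
        # x:     (B, C, T, H, W) noisy video
        # sigma: (B,) noise levels; frame: (B, C, H, W) anchor image; ef: (B,) in [0, 1]
        sd = self.sigma_data
        c_skip = sd**2 / (sigma**2 + sd**2)
        c_out = sigma * sd / torch.sqrt(sigma**2 + sd**2)
        c_in = 1.0 / torch.sqrt(sigma**2 + sd**2)
        c_noise = torch.log(sigma) / 4.0

        b = lambda v: v.view(-1, 1, 1, 1, 1)          # broadcast (B,) over (C, T, H, W)
        frames = frame.unsqueeze(2).expand_as(x)      # tile anchor frame over time
        h = torch.cat([b(c_in) * x, frames], dim=1)   # concatenate conditioning channels
        gain = self.cond(torch.stack([c_noise, ef], dim=1))   # (B, 2C)
        h = h * (1 + gain.view(x.shape[0], -1, 1, 1, 1))
        return b(c_skip) * x + b(c_out) * self.net(h)  # EDM skip/output scaling


# Smoke test: one denoising call on a random 16-frame clip.
model = ConditionalVideoDenoiser()
x = torch.randn(2, 1, 16, 32, 32)
out = model(
    x,
    sigma=torch.full((2,), 1.0),
    frame=torch.randn(2, 1, 32, 32),
    ef=torch.tensor([0.35, 0.65]),
)
print(out.shape)  # torch.Size([2, 1, 16, 32, 32])
```

The c_skip, c_out, c_in, and c_noise terms follow the standard EDM preconditioning; conditioning on the anchor frame by channel concatenation and on the clinical scalar by a learned channel-wise gain are common choices, assumed here purely for illustration.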