Image synthesis is expected to provide value for the translation of machine learning methods into clinical practice. Fundamental problems such as model robustness, domain transfer, causal modelling, and operator training become approachable through synthetic data. In particular, heavily operator-dependent modalities like ultrasound imaging require robust frameworks for image and video generation. So far, video generation has only been possible by providing input data that is as rich as the output data, e.g., an image sequence plus conditioning in, video out. However, clinical documentation is usually scarce: only single images are reported and stored, so retrospective patient-specific analysis and the generation of rich training data are impossible with current approaches. In this paper, we extend elucidated diffusion models for video modelling to generate plausible video sequences from single images and arbitrary conditioning with clinical parameters. We explore this idea in the context of echocardiograms by varying the Left Ventricular Ejection Fraction, the most essential clinical metric obtained from these examinations. We use the publicly available EchoNet-Dynamic dataset for all our experiments. Our image-to-sequence approach achieves an $R^2$ score of 93%, which is 38 points higher than recently proposed sequence-to-sequence generation methods. Code and models will be available at: https://github.com/HReynaud/EchoDiffusion.