Causally-enabled machine learning frameworks could help clinicians identify the best course of treatment by answering counterfactual questions. We explore this path for the case of echocardiograms by looking into the variation of the Left Ventricle Ejection Fraction, the most essential clinical metric derived from these examinations. We combine deep neural networks, twin causal networks and generative adversarial methods for the first time to build D'ARTAGNAN (Deep ARtificial Twin-Architecture GeNerAtive Networks), a novel causal generative model. We demonstrate the soundness of our approach on a synthetic dataset before applying it to cardiac ultrasound videos to answer the question: "What would this echocardiogram look like if the patient had a different ejection fraction?". To do so, we generate new ultrasound videos that retain the video style and anatomy of the original patient, with Ejection Fractions conditioned on a given input. We achieve an SSIM score of 0.79 and an R2 score of 0.51 on the counterfactual videos. Code and models are available at https://github.com/HReynaud/dartagnan.