Deep generative models have emerged as promising tools for detecting arbitrary anomalies in data, dispensing with the need for manual labelling. Recently, autoregressive transformers have achieved state-of-the-art performance for anomaly detection in medical imaging. Nonetheless, these models still have some intrinsic weaknesses, such as requiring images to be modelled as 1D sequences, the accumulation of errors during the sampling process, and the significant inference times associated with transformers. Denoising diffusion probabilistic models are a class of non-autoregressive generative models recently shown to produce excellent samples in computer vision (surpassing Generative Adversarial Networks), and to achieve log-likelihoods that are competitive with transformers while having fast inference times. Diffusion models can be applied to the latent representations learnt by autoencoders, making them easily scalable and strong candidates for application to high-dimensional data, such as medical images. Here, we propose a method based on diffusion models to detect and segment anomalies in brain imaging. By training the models on healthy data and then exploring their diffusion and reverse steps across the Markov chain, we can identify anomalous areas in the latent space and hence identify anomalies in the pixel space. Across a series of experiments with 2D CT and MRI data involving synthetic and real pathological lesions, our diffusion models achieve performance competitive with autoregressive approaches, with much reduced inference times, making their use clinically viable.
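To make the approach concrete, the following is a minimal, hedged sketch (not the authors' exact implementation) of how a diffusion model trained on healthy latents can be used for anomaly scoring: the latent of a test scan is partially noised along the forward (diffusion) chain to an intermediate step, then denoised back with the healthy model, and the residual between input and reconstruction serves as an anomaly map. The names `HealthyDenoiser`, `anomaly_map`, and the parameter `t_start` are hypothetical; in practice the denoiser would be trained on latent codes of healthy scans produced by an autoencoder.

```python
# Hedged sketch of diffusion-based anomaly detection on latent maps.
# `HealthyDenoiser` stands in for a trained epsilon-prediction network.
import torch
import torch.nn as nn

T = 1000                                      # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)         # linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)     # cumulative product \bar{alpha}_t

class HealthyDenoiser(nn.Module):
    """Toy epsilon-predictor over latent maps (assumed trained on healthy data)."""
    def __init__(self, channels=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + 1, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, z_t, t):
        # Broadcast the (normalised) timestep as an extra input channel.
        t_map = (t.float() / T).view(-1, 1, 1, 1).expand(-1, 1, *z_t.shape[2:])
        return self.net(torch.cat([z_t, t_map], dim=1))

@torch.no_grad()
def anomaly_map(model, z0, t_start=500):
    """Noise the latent to an intermediate step, denoise it back with the
    healthy model, and score anomalies by the reconstruction residual."""
    noise = torch.randn_like(z0)
    ab = alpha_bars[t_start]
    z_t = ab.sqrt() * z0 + (1 - ab).sqrt() * noise        # forward diffusion q(z_t | z_0)
    for t in reversed(range(t_start + 1)):                 # reverse (denoising) steps
        eps = model(z_t, torch.full((z0.shape[0],), t))
        a, ab_t = alphas[t], alpha_bars[t]
        mean = (z_t - (1 - a) / (1 - ab_t).sqrt() * eps) / a.sqrt()
        z_t = mean + betas[t].sqrt() * torch.randn_like(z_t) if t > 0 else mean
    # Healthy regions are restored faithfully while anomalous regions are
    # "healed" towards the healthy distribution, so the residual highlights them
    # (the map would then be decoded/upsampled to pixel space).
    return (z0 - z_t).abs().mean(dim=1, keepdim=True)

# Usage with a random (untrained) model and a dummy latent batch:
model = HealthyDenoiser()
z0 = torch.randn(2, 4, 32, 32)
print(anomaly_map(model, z0, t_start=250).shape)   # torch.Size([2, 1, 32, 32])
```

The choice of `t_start` trades off sensitivity and fidelity: noising further along the chain removes larger anomalies but also degrades healthy detail, and only a fraction of the full chain needs to be traversed, which is where the reduced inference time relative to autoregressive sampling comes from.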