Deep Learning (DL) methods have shown promising results for solving ill-posed inverse problems such as MR image reconstruction from undersampled $k$-space data. However, these approaches currently come with no guarantees on reconstruction quality, and the reliability of such algorithms is still poorly understood. Adversarial attacks offer a valuable tool for probing the possible failure modes and worst-case performance of DL-based reconstruction algorithms. In this paper, we describe adversarial attacks on multi-coil $k$-space measurements and evaluate them on the recently proposed E2E-VarNet and a simpler UNet-based model. In contrast to prior work, the attacks are targeted to specifically alter diagnostically relevant regions. Using two realistic attack models (adversarial $k$-space noise and adversarial rotations), we show that current state-of-the-art DL-based reconstruction algorithms are indeed sensitive to such perturbations, to a degree where relevant diagnostic information may be lost. Surprisingly, in our experiments the UNet and the more sophisticated E2E-VarNet were similarly sensitive to such attacks. Our findings add further evidence that caution must be exercised as DL-based methods move closer to clinical practice.
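To make the adversarial $k$-space noise attack concrete, below is a minimal PGD-style sketch in PyTorch. It is not the authors' implementation: the names `recon_model`, `kspace`, `mask`, and `roi_mask` are assumptions, standing in for any differentiable reconstruction network (e.g. E2E-VarNet or a UNet), the measured multi-coil $k$-space, the undersampling mask, and a binary image-space mask over the diagnostically relevant region. The key idea is simply gradient ascent on the reconstruction error inside the region of interest, with the perturbation projected back onto a small $\ell_\infty$ ball so the attack stays realistic.

```python
import torch

def adversarial_kspace_noise(recon_model, kspace, mask, roi_mask,
                             eps=0.05, steps=10):
    """PGD-style targeted attack (illustrative sketch, not the paper's code):
    find a small k-space perturbation that maximizes reconstruction error
    inside a diagnostically relevant region.

    recon_model : maps (kspace, mask) -> image, e.g. E2E-VarNet or UNet
    kspace      : multi-coil k-space tensor (complex stored as (..., 2))
    mask        : undersampling mask; the perturbation is restricted to
                  acquired k-space lines
    roi_mask    : binary image-space mask marking the targeted region
    eps         : L-inf bound on the adversarial perturbation
    """
    # Reference reconstruction from the clean measurements.
    target = recon_model(kspace, mask).detach()

    delta = torch.zeros_like(kspace, requires_grad=True)
    alpha = 2.5 * eps / steps  # common PGD step-size heuristic

    for _ in range(steps):
        recon = recon_model(kspace + delta * mask, mask)
        # Maximize error only inside the region of interest.
        loss = ((recon - target) * roi_mask).pow(2).mean()
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient ascent step
            delta.clamp_(-eps, eps)             # project onto L-inf ball
        delta.grad.zero_()

    return kspace + delta.detach() * mask

```

The adversarial rotation attack would follow the same template, except that the optimization variable is a single rotation angle applied to the object before the forward (coil-sensitivity and Fourier) model, rather than an additive perturbation in $k$-space.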