The past decade has seen rapid adoption of Artificial Intelligence (AI), specifically deep learning networks, in the Internet of Medical Things (IoMT) ecosystem. However, it has recently been shown that deep learning networks can be exploited by adversarial attacks, leaving IoMT vulnerable not only to data theft but also to the manipulation of medical diagnoses. Existing studies consider adding noise to the raw IoMT data or to the model parameters, which not only degrades overall performance on medical inference but is also ineffective against attacks such as the deep leakage from gradients (DLG) method. In this work, we propose the proximal gradient split learning (PGSL) method as a defense against model inversion attacks. The proposed method intentionally attacks the IoMT data as it undergoes deep neural network training on the client side. We propose the use of the proximal gradient method to recover gradient maps, together with a decision-level fusion strategy to improve recognition performance. Extensive analysis shows that PGSL not only provides an effective defense mechanism against model inversion attacks but also helps improve recognition performance on publicly available datasets. We report 14.0\%, 17.9\%, and 36.9\% gains in accuracy over reconstructed and adversarially attacked images.
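The proximal gradient method referenced above alternates a gradient step on a smooth loss with a proximal operator for a non-smooth regularizer. As a minimal, generic sketch (this is the textbook ISTA/LASSO setting, not the paper's PGSL algorithm itself; all names and the toy recovery problem here are illustrative assumptions):

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the L1 norm: shrinks each entry toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def proximal_gradient_step(x, grad_f, step, lam):
    # One proximal gradient update: gradient step on the smooth loss f,
    # then the proximal operator of the non-smooth regularizer lam*||.||_1.
    return soft_threshold(x - step * grad_f(x), step * lam)

# Toy example (assumed for illustration): recover a sparse signal from
# noisy linear measurements by minimizing 0.5*||Ax - b||^2 + lam*||x||_1.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[2, 7, 15]] = [1.5, -2.0, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)

grad_f = lambda x: A.T @ (A @ x - b)          # gradient of the smooth term
step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, Lipschitz constant of grad_f
x = np.zeros(20)
for _ in range(500):
    x = proximal_gradient_step(x, grad_f, step, lam=0.1)

print(np.round(x, 2))  # entries on the support are close to x_true
```

The same update pattern applies whenever a recovery objective splits into a smooth data-fidelity term and a simple non-smooth prior; only the proximal operator changes with the choice of regularizer.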