The past decade has seen rapid adoption of Artificial Intelligence (AI), specifically deep learning networks, in the Internet of Medical Things (IoMT) ecosystem. However, it has recently been shown that deep learning networks can be exploited by adversarial attacks that make IoMT vulnerable not only to data theft but also to the manipulation of medical diagnoses. Existing studies consider adding noise to the raw IoMT data or to the model parameters, which not only degrades the overall performance of medical inference but is also ineffective against attacks such as the deep leakage from gradients (DLG) method. In this work, we propose the proximal gradient split learning (PGSL) method as a defense against model inversion attacks. The proposed method intentionally attacks the IoMT data while it undergoes deep neural network training at the client side. We propose the use of the proximal gradient method to recover gradient maps and a decision-level fusion strategy to improve recognition performance. Extensive analysis shows that PGSL not only provides an effective defense mechanism against model inversion attacks but also helps improve recognition performance on publicly available datasets. We report 17.9$\%$ and 36.9$\%$ gains in accuracy over reconstructed and adversarially attacked images, respectively.
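To make the proximal gradient step concrete, the following is a minimal illustrative sketch, not the paper's implementation: it applies the classical ISTA form of the proximal gradient method, where a gradient step on a smooth data-fit term is followed by the soft-thresholding proximal operator of an $\ell_1$ penalty. The function names (`soft_threshold`, `proximal_gradient_recover`) and the parameters `lam`, `step`, and `iters` are hypothetical choices for the sketch, assuming a generic sparse-recovery objective rather than the paper's specific gradient-map recovery loss.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||.||_1 (elementwise soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def proximal_gradient_recover(y, A, lam=0.1, step=1e-2, iters=200):
    """Illustrative proximal gradient (ISTA) loop: recover a sparse x from
    linear measurements y ~ A @ x by minimizing
        0.5 * ||A @ x - y||^2 + lam * ||x||_1.
    Assumed setup for the sketch; not the paper's PGSL objective."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)                         # gradient of the smooth term
        x = soft_threshold(x - step * grad, step * lam)  # proximal step on the l1 term
    return x
```

The same two-step pattern (smooth gradient step, then a proximal mapping) is what allows a non-smooth regularizer to be handled during recovery, which is the role the abstract assigns to the proximal gradient method when recovering gradient maps.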