Robust, secure, and private machine learning is essential to realizing the full potential of the Internet of Things (IoT). Federated learning has been shown to help protect against privacy violations and information leakage. However, it introduces new risk vectors that make machine learning models harder to defend against adversarial samples. In this study, we examine the role of differential privacy and self-normalization in mitigating the risk of adversarial samples, specifically in a federated learning environment. We introduce DiPSeN, a Differentially Private Self-normalizing Neural Network that combines differential privacy noise with self-normalizing techniques. Our empirical results on three publicly available datasets show that, across several evaluation metrics, DiPSeN improves the adversarial robustness of a deep learning classifier in a federated learning environment.
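To make the two ingredients concrete, the sketch below illustrates the general idea rather than the authors' exact DiPSeN algorithm: a self-normalizing classifier built from SELU activations (with AlphaDropout, which preserves self-normalization), trained on a federated client with a DP-SGD-style update that clips gradients and perturbs them with Gaussian noise. The layer sizes, `clip_norm`, and `noise_multiplier` values are illustrative assumptions, and the gradient clipping here is batch-level for brevity, whereas formal per-record differential privacy guarantees would require per-example clipping.

```python
# Minimal sketch, not the paper's exact method: SELU self-normalization plus
# DP-SGD-style noisy gradient updates on a federated client.
import torch
import torch.nn as nn


class SelfNormalizingNet(nn.Module):
    """Fully connected classifier with SELU activations (self-normalizing)."""

    def __init__(self, in_dim: int, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256),
            nn.SELU(),
            nn.AlphaDropout(p=0.05),  # dropout variant compatible with SELU
            nn.Linear(256, 128),
            nn.SELU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def local_dp_update(model: nn.Module,
                    batch: tuple,
                    lr: float = 0.01,
                    clip_norm: float = 1.0,
                    noise_multiplier: float = 1.1) -> None:
    """One DP-SGD-style local step: compute loss, clip gradients, add noise."""
    x, y = batch
    loss = nn.functional.cross_entropy(model(x), y)
    model.zero_grad()
    loss.backward()

    # Clip the overall gradient norm, then add Gaussian noise calibrated to
    # the clipping bound before applying a plain SGD step.
    # (Per-example clipping, omitted here, is needed for strict DP accounting.)
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.grad += torch.randn_like(p.grad) * noise_multiplier * clip_norm
                p -= lr * p.grad
```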