Many existing privacy-enhanced speech emotion recognition (SER) frameworks focus on perturbing the original speech data through adversarial training within a centralized machine learning setup. However, this privacy protection scheme can fail since the adversary can still access the perturbed data. In recent years, distributed learning algorithms, especially federated learning (FL), have gained popularity for protecting privacy in machine learning applications. While FL offers an intuitive way to safeguard privacy by keeping the data on local devices, prior work has shown that privacy attacks, such as attribute inference attacks, are still achievable against SER systems trained using FL. In this work, we evaluate user-level differential privacy (UDP) for mitigating the privacy leaks of SER systems in FL. UDP provides theoretical privacy guarantees parameterized by $\epsilon$ and $\delta$. Our results show that UDP can effectively reduce attribute information leakage while preserving the utility of the SER system when the adversary accesses only a single model update. However, the efficacy of UDP degrades when the FL system leaks more model updates to the adversary. We make the code publicly available at https://github.com/usc-sail/fed-ser-leakage to reproduce the results.
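The core of UDP in this setting is the Gaussian mechanism applied to each client's model update: the update is clipped to bound its sensitivity, then perturbed with noise calibrated to the privacy parameters $(\epsilon, \delta)$. The following is a minimal illustrative sketch of that mechanism, not the paper's actual implementation; the function and parameter names are assumptions.

```python
import numpy as np

def udp_gaussian_mechanism(update, clip_norm, epsilon, delta, rng=None):
    """Clip a client's model update to an L2-norm bound, then add Gaussian
    noise calibrated to (epsilon, delta)-differential privacy.
    Illustrative sketch only; names are not from the paper."""
    if rng is None:
        rng = np.random.default_rng()
    update = np.asarray(update, dtype=float)
    # Clipping bounds the L2 sensitivity of the update at clip_norm.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    # Standard Gaussian-mechanism noise scale for L2 sensitivity clip_norm.
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + rng.normal(0.0, sigma, size=clipped.shape)
```

Smaller $\epsilon$ (or $\delta$) increases the noise scale, trading SER utility for stronger protection against attribute inference; this is the utility/privacy trade-off the evaluation explores.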