We study the privacy implications of deploying recurrent neural networks in machine learning. We consider membership inference attacks (MIAs), in which an attacker aims to infer whether a given data record has been used in the training of a learning agent. Using existing MIAs that target feed-forward neural networks, we empirically demonstrate that the attack accuracy wanes for data records used earlier in the training history. In contrast, recurrent networks are specifically designed to better remember their past experience; hence, they are likely to be more vulnerable to MIAs than their feed-forward counterparts. We develop a pair of MIA layouts for two primary applications of recurrent networks, namely, deep reinforcement learning and sequence-to-sequence tasks. We use the first attack to provide empirical evidence that recurrent networks are indeed more vulnerable to MIAs than feed-forward networks with the same performance level. We use the second attack to showcase the differences between the effects of overtraining recurrent and feed-forward networks on the accuracy of their respective MIAs. Finally, we deploy a differential privacy mechanism to resolve the privacy vulnerability that the MIAs exploit. For both attack layouts, the privacy mechanism degrades the attack accuracy from above 80% to 50%, which is equivalent to guessing the data membership uniformly at random, while trading off less than 10% utility.
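To make the threat model concrete, the following is a minimal sketch of a generic confidence-threshold membership inference attack (a common baseline, not the attack layouts developed in this paper): an attacker guesses that a record is a training member whenever the target model is highly confident on its true label. The classifier, data, and threshold below are illustrative assumptions, not artifacts from our experiments.

```python
# Minimal sketch of a confidence-threshold membership inference attack.
# Assumes a target classifier exposing predict_proba; data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

# Synthetic target task: half the data trains the target model ("members"),
# the other half is held out ("non-members").
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_mem, y_mem = X[:1000], y[:1000]
X_non, y_non = X[1000:], y[1000:]

target = LogisticRegression(max_iter=1000).fit(X_mem, y_mem)

def true_label_confidence(model, X, y):
    """Probability the model assigns to each record's true label."""
    probs = model.predict_proba(X)
    return probs[np.arange(len(y)), y]

def infer_membership(model, X, y, threshold=0.9):
    """Guess 'member' (1) when the model is highly confident on the record."""
    return (true_label_confidence(model, X, y) >= threshold).astype(int)

# Attack accuracy: average of the member detection rate and the
# non-member rejection rate; 0.5 corresponds to random guessing.
guess_mem = infer_membership(target, X_mem, y_mem)
guess_non = infer_membership(target, X_non, y_non)
attack_acc = 0.5 * (guess_mem.mean() + (1 - guess_non).mean())
print(f"attack accuracy: {attack_acc:.2f}  (0.50 = random guessing)")
```

A well-generalizing or differentially private target model narrows the confidence gap between members and non-members, pushing such an attack toward the 50% random-guessing baseline.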