Multi-party computation (MPC) is a branch of cryptography in which multiple non-colluding parties execute a well-designed protocol to securely compute a function. Under the non-colluding assumption, MPC provides a cryptographic guarantee that the parties will not learn sensitive information from the computation process, making it an appealing framework for applications that involve privacy-sensitive user data. In this paper, we study training and inference of neural networks under the MPC setup. This is challenging because the elementary operations of neural networks, such as the ReLU activation function and matrix-vector multiplications, are very expensive to compute due to the added multi-party communication overhead. To address this, we propose the HD-cos network, which uses 1) cosine as the activation function, and 2) the Hadamard-Diagonal transformation to replace unstructured linear transformations. We show that both approaches enjoy strong theoretical motivations and efficient computation under the MPC setup. We demonstrate on multiple public datasets that HD-cos matches the quality of the more expensive baselines.
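To make the construction concrete, the following is a minimal sketch of one plausible HD-cos-style layer: a learned diagonal scaling followed by a normalized Hadamard transform and a cosine activation, i.e. roughly cos(H D x). The function and parameter names (hd_cos_layer, diag) and the exact composition of H and D blocks are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def hadamard_transform(x):
    """Fast Walsh-Hadamard transform along the last axis.
    Assumes the last dimension is a power of two; normalized to be orthonormal."""
    x = x.copy()
    d = x.shape[-1]
    h = 1
    while h < d:
        for i in range(0, d, 2 * h):
            a = x[..., i:i + h].copy()
            b = x[..., i + h:i + 2 * h].copy()
            x[..., i:i + h] = a + b
            x[..., i + h:i + 2 * h] = a - b
        h *= 2
    return x / np.sqrt(d)

def hd_cos_layer(x, diag):
    """One illustrative HD-cos layer: diagonal scaling, Hadamard mixing, cosine activation.
    `diag` is a hypothetical learnable per-coordinate scale vector."""
    return np.cos(hadamard_transform(diag * x))

# Toy usage: a 2-layer network on 8-dimensional inputs.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
d1, d2 = rng.normal(size=8), rng.normal(size=8)
out = hd_cos_layer(hd_cos_layer(x, d1), d2)
```

The appeal under MPC is that the Hadamard transform is a fixed, structured matrix (computable in O(d log d) additions and subtractions with no secret multiplications), so only the diagonal parameters require expensive secure multiplications, and the cosine activation avoids the comparison protocols that ReLU would need.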