With the development of machine learning, it has become difficult for a single server to process all the data, so machine learning tasks need to be spread across multiple servers, turning centralized machine learning into distributed machine learning. However, privacy remains an unsolved problem in distributed machine learning. Multi-key homomorphic encryption over the torus (MKTFHE) is one of the suitable candidates for solving this problem. However, the decryption procedure of MKTFHE may carry security risks, and the most recent MKTFHE results support only Boolean and linear operations. Consequently, MKTFHE cannot evaluate non-linear functions such as Sigmoid directly, and it remains hard to perform common machine learning tasks such as logistic regression and neural network training with high performance. This paper first introduces secret sharing to propose a new distributed decryption protocol for MKTFHE, then designs an MKTFHE-friendly activation function, and finally uses them to implement logistic regression and neural network training over MKTFHE. We prove the correctness and security of our decryption protocol and compare the efficiency and accuracy of using Taylor polynomials of Sigmoid versus our proposed function as the activation function. The experiments show that our function is about 10 times more efficient than directly using the 7th-order Taylor polynomial, while the accuracy of the trained model is similar to that of the scheme using a high-order polynomial as the activation function.