The application of secure multiparty computation (MPC) to machine learning, especially privacy-preserving neural network training, has attracted tremendous attention from the research community in recent years. MPC enables several data owners to jointly train a neural network while preserving the privacy of each participant's data. However, most previous works focus on the semi-honest threat model, which cannot withstand fraudulent messages sent by malicious participants. In this paper, we propose an approach for constructing efficient $n$-party protocols for secure neural network training that provide security for all honest participants even when a majority of the parties are malicious. Compared to other designs that provide only semi-honest security in a dishonest-majority setting, our actively secure neural network training incurs affordable efficiency overheads of around 2X and 2.7X in LAN and WAN settings, respectively. In addition, we propose a scheme that allows additive shares defined over an integer ring $\mathbb{Z}_N$ to be securely converted to additive shares over a finite field $\mathbb{Z}_Q$, which may be of independent interest. Such a conversion scheme is essential for securely and correctly converting shared Beaver triples generated over an integer ring in the preprocessing phase into triples defined over a field for use in the online-phase computation.
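To illustrate why a dedicated ring-to-field conversion is needed, the following toy sketch (not the paper's protocol; party count, moduli, and helper names are illustrative) shows additive secret sharing over $\mathbb{Z}_N$ and why naively reducing ring shares modulo $Q$ fails to reconstruct the secret over $\mathbb{Z}_Q$:

```python
import random

def share(x, n_parties, modulus):
    """Additively share x: the shares sum to x modulo `modulus`."""
    shares = [random.randrange(modulus) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % modulus)
    return shares

def reconstruct(shares, modulus):
    """Recover the secret by summing all shares modulo `modulus`."""
    return sum(shares) % modulus

N = 2**16    # toy integer ring Z_N
Q = 65537    # toy prime field Z_Q
x = 12345

ring_shares = share(x, 3, N)
assert reconstruct(ring_shares, N) == x  # correct over the ring

# Naively reducing each ring share mod Q does NOT generally
# reconstruct x over Z_Q: over the integers the shares sum to
# x + k*N for some carry k, and k*N is not a multiple of Q,
# so the wrap-around survives the reduction.
naive = sum(s % Q for s in ring_shares) % Q
k = (sum(ring_shares) - x) // N  # the integer carry
assert naive == (x + k * N) % Q  # off by k*N mod Q unless k == 0
```

The carry term $k \cdot N \bmod Q$ is exactly the error a secure conversion protocol must eliminate without revealing the shares, which is why the conversion of preprocessing-phase triples cannot be done by local reduction alone.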