Mixing arithmetic and boolean circuits to perform privacy-preserving machine learning has become increasingly popular. Towards this, we propose Tetrad, a framework for the four-party setting with at most one active corruption. Tetrad works over rings and supports two levels of security: fairness and robustness. The fair multiplication protocol costs 5 ring elements, improving over the state-of-the-art Trident (Chaudhari et al., NDSS'20). A key feature of Tetrad is that robustness comes for free over the fair protocols. Other highlights across the two variants include (a) probabilistic truncation without overhead, (b) multi-input multiplication protocols, and (c) conversion protocols to switch between the computational domains, along with a tailor-made garbled circuit approach. Tetrad is benchmarked for both training and inference over deep neural networks such as LeNet and VGG16. We find that Tetrad is up to 4 times faster in ML training and up to 5 times faster in ML inference. Tetrad is also lightweight in terms of deployment cost, costing up to 6 times less than Trident.
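As background for the fixed-point arithmetic that ring-based frameworks of this kind rely on, the sketch below illustrates probabilistic truncation in its simplest, generic form (local truncation of additive shares, in the style of SecureML), not Tetrad's four-party protocol: multiplying two fixed-point values doubles the number of fraction bits, so each party locally arithmetic-shifts its own share, and the reconstructed value is correct up to one unit in the last fractional bit with high probability for bounded inputs. All names and parameters (e.g. FRAC_BITS, the 2-out-of-2 sharing) are illustrative assumptions.

```python
# Minimal sketch: fixed-point encoding over Z_{2^64} and probabilistic
# truncation of 2-out-of-2 additive shares via local arithmetic shifts.
# This is a generic illustration, not Tetrad's actual protocol.
import random

RING_BITS = 64
MOD = 1 << RING_BITS
FRAC_BITS = 13          # fixed-point precision (illustrative assumption)

def encode(x: float) -> int:
    """Map a real number to a ring element in two's-complement form."""
    return int(round(x * (1 << FRAC_BITS))) % MOD

def decode(v: int) -> float:
    """Map a ring element back to a real number."""
    if v >= MOD // 2:                # upper half of the ring encodes negatives
        v -= MOD
    return v / (1 << FRAC_BITS)

def share(v: int):
    """Split v into two additive shares over Z_{2^64}."""
    s0 = random.randrange(MOD)
    return s0, (v - s0) % MOD

def probabilistic_truncate(s0: int, s1: int):
    """Each party locally arithmetic-shifts its own share by FRAC_BITS.
    The reconstructed result is floor(v / 2^FRAC_BITS) or one less,
    except with small probability (when a share lands near the ring boundary)."""
    def shift(s):
        if s >= MOD // 2:            # interpret the share as signed, then shift
            s -= MOD
        return (s >> FRAC_BITS) % MOD
    return shift(s0), shift(s1)

# Example: truncating the product of two fixed-point values back to FRAC_BITS.
a, b = 3.25, -1.5
prod = (encode(a) * encode(b)) % MOD        # carries 2*FRAC_BITS fraction bits
s0, s1 = share(prod)
t0, t1 = probabilistic_truncate(s0, s1)
print(decode((t0 + t1) % MOD), "~", a * b)  # off by at most 2^-FRAC_BITS w.h.p.
```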