Outsourcing decision tree inference services to the cloud is highly beneficial, yet it raises critical privacy concerns regarding the model provider's proprietary decision tree and the client's private input data. In this paper, we design, implement, and evaluate a new system that enables highly efficient outsourcing of decision tree inference. Our system significantly improves upon the state-of-the-art in both the overall online end-to-end secure inference service latency at the cloud and the local-side performance of the model provider. We first present a new scheme that securely shifts most of the model provider's processing to the cloud, resulting in a substantial reduction in the model provider's performance complexities. We further devise a scheme that substantially optimizes the performance of encrypted decision tree inference at the cloud, particularly the communication round complexity. The synergy of these techniques allows our new system to achieve up to $8\times$ better overall online end-to-end secure inference latency at the cloud side over a realistic WAN environment, while bringing the model provider up to $19\times$ savings in communication and $18\times$ savings in computation.