In Privacy-Preserving Machine Learning (PPML), Fully Homomorphic Encryption (FHE) is often used for encrypted computation, allowing secure and privacy-preserving outsourcing of machine learning workloads. While FHE enables encrypted arithmetic operations, executing programmatic logic such as control structures and conditional branching has remained a challenge. As a result, progress in encrypted training of PPML models with FHE has been relatively stagnant compared to encrypted inference, owing to the considerably higher logical complexity required in training. In addition, prior works that demonstrate encrypted training rely on Interactive Rounds of Decryption and Evaluation (IRDE), in which certain operations are decrypted and evaluated in plaintext through interactive rounds between the untrusted computing party (server) and the trusted private-key owner (client). In decision tree training, for example, the current state of the art requires d rounds of IRDE for a tree of depth d. To address this issue in PPML and FHE, we introduce the Blind Evaluation Framework (BEF), a cryptographically secure programming framework that enables blind, but correct, execution of programming logic without IRDE. This is achieved by deconstructing programming logic into binary circuits and binary arithmetic, finding alternative representations of logical statements, and adapting them to FHE for secure logical programming. To the best of our knowledge, BEF is the first framework to enable both training and inference of PPML models with FHE without decryption rounds. By eliminating IRDE entirely, BEF advances the state of the art in IRDE efficiency and enables adoption of FHE in use cases where large amounts of computing resources are available but trusted clients cannot be kept on hand to perform decryption rounds.
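To give a flavor of the "alternative representations of logical statements" mentioned above, the following is a minimal plaintext sketch (not code from the paper) of a standard trick for branch-free conditional logic: a conditional is replaced by arithmetic over an encrypted 0/1 bit, an expression that FHE schemes can evaluate homomorphically without the server ever learning which branch was taken. The function names `mux` and `lt` are illustrative and not taken from BEF.

```python
def mux(cond, a, b):
    """Branch-free select: returns a if cond == 1, b if cond == 0.

    Over ciphertexts, the same expression cond*a + (1 - cond)*b
    can be evaluated homomorphically, so the evaluating server
    never learns the value of cond.
    """
    return cond * a + (1 - cond) * b


def lt(x, y):
    """Plaintext stand-in for an encrypted comparison circuit
    that outputs an (encrypted) 0/1 indicator bit."""
    return 1 if x < y else 0


# Plaintext demonstration of the pattern:
x, y = 3, 7
smaller = mux(lt(x, y), x, y)  # selects min(x, y) without branching
```

Composing such selection and comparison gadgets is what allows control flow (e.g., routing samples down a decision tree) to be expressed purely as binary arithmetic that an FHE server can execute blindly.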