In federated learning (FL), data providers jointly train a machine learning model without sharing their training data. This makes it challenging to provide verifiable claims about properties of the final trained FL model, e.g., about the training data that was used, the data sanitization that was applied, or the correctness of the training algorithm: a malicious data provider can simply deviate from the correct training protocol without being detected. While prior FL training systems have explored the use of trusted execution environments (TEEs) to combat such attacks, existing approaches struggle to link attestation proofs from TEEs robustly and effectively to claims about the trained FL model. TEEs have also been shown to suffer from a wide range of attacks, including side-channel attacks. We describe VerifiableFL, a system for training FL models that provides verifiable claims about trained models with the help of runtime attestation proofs. VerifiableFL generates such proofs using a new abstraction, exclaves, which are integrity-only execution environments that hold no secrets and are therefore immune to data leakage attacks. Whereas previous approaches only attested whole TEEs statically, i.e., at deployment time, VerifiableFL uses exclaves to attest individual data transformations during FL training. These runtime attestation proofs then form an attested dataflow graph of the entire FL model training computation. An auditor can check this graph to ensure that the trained FL model satisfies its verifiable claims, e.g., that data providers applied a particular data sanitization or that the model provider used a particular aggregation strategy. We implement VerifiableFL by extending NVIDIA's NVFlare FL framework to use exclaves, and show that VerifiableFL introduces less than 10% overhead compared to unprotected FL model training.
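To make the attested-dataflow-graph idea concrete, the following is a minimal, hypothetical Python sketch of the kind of check an auditor might run over such a graph. The `Node` class, the `check_claims` function, and the node kinds (`load_data`, `sanitize`, `train`, `aggregate`) are illustrative assumptions for this sketch, not VerifiableFL's actual API or graph format.

```python
# Hypothetical sketch (not the VerifiableFL API): an auditor walks an attested
# dataflow graph and checks that every data provider's raw data passed through
# a verified sanitization step before contributing to the aggregated model.
from dataclasses import dataclass, field

@dataclass
class Node:
    """One attested data transformation, as proven by an exclave at runtime."""
    node_id: str
    kind: str                     # e.g. "load_data", "sanitize", "train", "aggregate"
    attestation_ok: bool          # whether the runtime attestation proof verified
    inputs: list = field(default_factory=list)  # ids of upstream nodes

def check_claims(nodes: dict[str, Node], final_id: str) -> bool:
    """Return True if every path from raw data to the final aggregation
    is attested and passes through a sanitization step."""
    def path_sanitized(node_id: str) -> bool:
        node = nodes[node_id]
        if not node.attestation_ok:
            return False          # unattested transformation breaks the claim
        if node.kind == "sanitize":
            return True           # this branch is covered by a sanitizer
        if node.kind == "load_data":
            return False          # reached raw data without seeing a sanitizer
        return all(path_sanitized(i) for i in node.inputs)
    return path_sanitized(final_id)

# Toy graph: two providers sanitize and train locally, model provider aggregates.
graph = {
    "d1": Node("d1", "load_data", True),
    "s1": Node("s1", "sanitize", True, ["d1"]),
    "t1": Node("t1", "train", True, ["s1"]),
    "d2": Node("d2", "load_data", True),
    "s2": Node("s2", "sanitize", True, ["d2"]),
    "t2": Node("t2", "train", True, ["s2"]),
    "agg": Node("agg", "aggregate", True, ["t1", "t2"]),
}
print(check_claims(graph, "agg"))  # True; flips to False if any proof fails
```

The sketch only illustrates the auditing principle: because each transformation carries its own runtime attestation proof, a claim such as "all training data was sanitized" reduces to a graph traversal rather than trust in the data providers.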