Federated learning (FL) has become increasingly widespread due to its ability to train models collaboratively without centralizing sensitive data. While most research on FL emphasizes privacy-preserving techniques during training, the evaluation phase also presents significant privacy risks that have not been adequately addressed in the literature. In particular, the state-of-the-art solution for computing the area under the curve (AUC) in FL systems employs differential privacy, which not only fails to protect against a malicious aggregator but also suffers from severe performance degradation on smaller datasets. To overcome these limitations, we propose a novel evaluation method that leverages fully homomorphic encryption (FHE). To the best of our knowledge, this is the first work to apply FHE to privacy-preserving model evaluation in federated learning while providing verifiable security guarantees. In our approach, clients encrypt their true-positive and false-positive counts based on predefined thresholds and submit them to an aggregator, which then performs homomorphic operations to compute the global AUC without ever seeing intermediate or final results in plaintext. We offer two variants of our protocol: one secure against a semi-honest aggregator and one that additionally detects and prevents manipulations by a malicious aggregator. Besides providing verifiable security guarantees, our solution achieves superior accuracy across datasets of any size and distribution, eliminating the performance issues that the existing state-of-the-art method faces on small datasets, and its runtime is negligible and independent of the test-set size. Experimental results confirm that our method can compute the AUC among 100 parties in under two seconds with near-perfect (99.93%) accuracy while preserving complete data privacy.
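To make the data flow concrete, the following is a minimal sketch of the semi-honest setting under stated assumptions: it uses the TenSEAL library (BFV scheme), a shared grid of 101 thresholds, simulated client data, and a single shared encryption context for brevity; in a deployment, clients and the aggregator would hold only the public context, and decryption of the aggregate would be done by a key-holding evaluator. These choices, as well as the helper names, are illustrative assumptions and not details taken from the abstract; the malicious-aggregator variant is not sketched here.

```python
# Illustrative sketch (not the paper's implementation): clients report encrypted
# TP/FP counts at predefined thresholds; the aggregator sums ciphertexts without
# seeing any plaintext; the key holder decrypts only the aggregate and derives
# the global AUC by trapezoidal integration over the resulting ROC points.
import numpy as np
import tenseal as ts

THRESHOLDS = np.linspace(0.0, 1.0, 101)  # shared, predefined threshold grid (assumed)

def local_counts(scores, labels):
    """Per-client TP/FP counts at every predefined threshold, plus class totals."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    tp = [int(((scores >= t) & (labels == 1)).sum()) for t in THRESHOLDS]
    fp = [int(((scores >= t) & (labels == 0)).sum()) for t in THRESHOLDS]
    return tp, fp, int((labels == 1).sum()), int((labels == 0).sum())

# FHE context (BFV). For this toy, single-process example the same context is
# used everywhere; in practice only the evaluator would keep the secret key.
ctx = ts.context(ts.SCHEME_TYPE.BFV, poly_modulus_degree=4096, plain_modulus=1032193)

# --- Clients: compute and encrypt local counts (toy data for 5 clients) ---
rng = np.random.default_rng(0)
clients = []
for _ in range(5):
    labels = rng.integers(0, 2, 200)
    scores = np.clip(labels * 0.3 + rng.random(200) * 0.7, 0.0, 1.0)
    tp, fp, pos, neg = local_counts(scores, labels)
    clients.append({
        "tp": ts.bfv_vector(ctx, tp),
        "fp": ts.bfv_vector(ctx, fp),
        "pos": ts.bfv_vector(ctx, [pos]),
        "neg": ts.bfv_vector(ctx, [neg]),
    })

# --- Aggregator: homomorphic addition only; never sees counts in plaintext ---
agg_tp, agg_fp = clients[0]["tp"], clients[0]["fp"]
agg_pos, agg_neg = clients[0]["pos"], clients[0]["neg"]
for c in clients[1:]:
    agg_tp += c["tp"]; agg_fp += c["fp"]
    agg_pos += c["pos"]; agg_neg += c["neg"]

# --- Key-holding evaluator: decrypt the aggregate and compute the global AUC ---
tpr = np.array(agg_tp.decrypt()) / agg_pos.decrypt()[0]
fpr = np.array(agg_fp.decrypt()) / agg_neg.decrypt()[0]
order = np.argsort(fpr)
auc = np.trapz(tpr[order], fpr[order])  # trapezoidal rule over the ROC points
print(f"Global AUC over all clients: {auc:.4f}")
```

Note that aggregating fixed-threshold TP/FP counts requires only additive homomorphic operations, so the aggregator performs no ciphertext multiplications and learns neither per-client nor global statistics.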