With the ever-growing adoption of the Internet of Things (IoT) and Edge Computing paradigms, centralized Machine and Deep Learning (ML/DL) has become challenging because data are increasingly held in distributed silos containing sensitive information. The rising concern for data privacy is promoting the development of collaborative and privacy-preserving ML/DL techniques such as Federated Learning (FL). FL preserves data privacy by design, since the local data of participants are never exposed during the creation of the global, collaborative model. However, data privacy and performance are no longer sufficient, and there is a genuine need to trust model predictions. The literature has proposed some works on trustworthy ML/DL (without data privacy), where robustness, fairness, explainability, and accountability are identified as relevant pillars. However, more effort is needed to identify the trustworthiness pillars and evaluation metrics relevant to FL models and to create solutions that compute the trustworthiness level of FL models. Thus, this work analyzes the existing requirements for trustworthiness evaluation in FL and proposes a comprehensive taxonomy of six pillars (privacy, robustness, fairness, explainability, accountability, and federation), with associated notions and more than 30 metrics for computing the trustworthiness of FL models. Then, an algorithm called FederatedTrust is designed, according to the pillars and metrics identified in the taxonomy, to compute the trustworthiness score of FL models. A prototype of FederatedTrust has been implemented and deployed into the learning process of FederatedScope, a well-known FL framework. Finally, four experiments performed with the FEMNIST dataset under different FederatedScope federation configurations demonstrate the usefulness of FederatedTrust for computing the trustworthiness of FL models.
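Although the abstract does not specify the aggregation formula, the sketch below illustrates one plausible pillar-based computation, assuming each pillar score is the mean of its normalized metrics and the overall trustworthiness score is a weighted average over pillars. The pillar names follow the taxonomy above, but all metric names, values, and weights are illustrative assumptions, not FederatedTrust's actual algorithm.

```python
# Minimal sketch of a pillar-based trustworthiness aggregation.
# Assumption: every metric is already normalized to [0, 1]; the metric
# names and weights here are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class Pillar:
    name: str
    metrics: dict[str, float]  # metric name -> normalized score in [0, 1]

    def score(self) -> float:
        # Unweighted mean of the pillar's normalized metrics.
        return sum(self.metrics.values()) / len(self.metrics)


def trustworthiness(pillars: list[Pillar], weights: dict[str, float]) -> float:
    """Weighted average of pillar scores; weights are assumed to sum to 1."""
    return sum(weights[p.name] * p.score() for p in pillars)


# Hypothetical metric values for a trained FL model.
pillars = [
    Pillar("privacy", {"differential_privacy": 0.8, "uncertainty": 0.7}),
    Pillar("robustness", {"certified_robustness": 0.6, "loss_sensitivity": 0.7}),
    Pillar("fairness", {"selection_fairness": 0.9, "accuracy_variance": 0.8}),
    Pillar("explainability", {"transparency": 0.5, "feature_importance": 0.6}),
    Pillar("accountability", {"factsheet_completeness": 0.9}),
    Pillar("federation", {"client_selection": 0.7, "scale": 0.8}),
]
equal_weights = {p.name: 1 / len(pillars) for p in pillars}

print(f"Trustworthiness score: {trustworthiness(pillars, equal_weights):.2f}")
```

In practice, the weights could be tuned per deployment (e.g., emphasizing privacy in healthcare federations), which is why the aggregation is parameterized rather than hard-coded.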