Conventional centralised deep learning paradigms are not feasible when data from different sources cannot be shared due to data privacy or transmission limitations. To resolve this problem, federated learning has been introduced to transfer knowledge across multiple sources (clients) with non-shared data while optimising a globally generalised central model (server). Existing federated learning paradigms mostly focus on transferring holistic high-level knowledge (such as class labels) across models; such knowledge is closely related to specific objects of interest and may therefore suffer from model inversion attacks. In contrast, in this work we consider transferring mid-level semantic knowledge (such as attributes), which is not sensitive to specific objects of interest and is therefore more privacy-preserving and scalable. To this end, we formulate a new Federated Zero-Shot Learning (FZSL) paradigm that learns mid-level semantic knowledge at multiple local clients with non-shared local data and cumulatively aggregates a globally generalised central model for deployment. To improve model discriminative ability, we propose to explore semantic knowledge augmentation from external knowledge sources to enrich the mid-level semantic space in FZSL. Extensive experiments on five zero-shot learning benchmark datasets validate the effectiveness of our approach for optimising a generalisable federated learning model with mid-level semantic knowledge transfer.
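To make the described client-server communication pattern concrete, below is a minimal FedAvg-style sketch in NumPy of one plausible FZSL training loop: each client fits a mapping from visual features to mid-level attribute vectors on its own non-shared data, and the server aggregates the resulting models by data-size-weighted averaging. All function names, the linear attribute predictor, and the toy data are illustrative assumptions for exposition, not the paper's actual implementation.

```python
# Hypothetical sketch of an FZSL communication round (not the paper's code).
import numpy as np

def local_update(global_weights, features, attributes, lr=0.1, epochs=5):
    """One client's local step: fit a linear map from visual features to
    mid-level attribute vectors on non-shared data via gradient descent
    on a least-squares objective."""
    W = global_weights.copy()
    for _ in range(epochs):
        pred = features @ W                               # (n, attr_dim)
        grad = features.T @ (pred - attributes) / len(features)
        W -= lr * grad
    return W

def server_aggregate(client_weights, client_sizes):
    """Server step: data-size-weighted average of client models (FedAvg)."""
    total = sum(client_sizes)
    return sum(W * (n / total) for W, n in zip(client_weights, client_sizes))

# Toy run: 3 clients, 64-d visual features, 16-d attribute space.
rng = np.random.default_rng(0)
W_global = np.zeros((64, 16))
clients = [(rng.normal(size=(100, 64)), rng.normal(size=(100, 16)))
           for _ in range(3)]
for _ in range(10):                                       # communication rounds
    updates = [local_update(W_global, X, A) for X, A in clients]
    W_global = server_aggregate(updates, [len(X) for X, _ in clients])
```

Note that only model parameters cross the client-server boundary in this sketch; the feature-attribute pairs never leave each client, which is the property the abstract's privacy argument relies on.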