We describe federated reconnaissance, a class of learning problems in which distributed clients learn new concepts independently and communicate that knowledge efficiently. In particular, we propose an evaluation framework and methodological baseline for a system in which each client is expected to learn a growing set of classes and communicate knowledge of those classes efficiently to other clients, such that, after knowledge merging, the clients should be able to accurately discriminate between classes in the superset of classes observed by the set of clients. We compare a range of learning algorithms for this problem and find that prototypical networks are a strong approach in that they are robust to catastrophic forgetting while incorporating new information efficiently. Furthermore, we show that the online averaging of prototype vectors is effective for client model merging and requires only a small amount of communication overhead, memory, and update time per class, with no gradient-based learning or hyperparameter tuning. Additionally, to put our results in context, we find that a simple prototypical network with four convolutional layers significantly outperforms complex, state-of-the-art continual learning algorithms, increasing accuracy by over 22% after learning 600 Omniglot classes and over 33% after learning 20 mini-ImageNet classes incrementally. These results have important implications for federated reconnaissance and continual learning more generally, demonstrating that communicating feature vectors is an efficient, robust, and effective means of distributed, continual learning.
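The online averaging of prototype vectors mentioned above can be illustrated with a minimal sketch. The `PrototypeStore` class below is a hypothetical illustration, not the paper's implementation: each client maintains a running mean embedding per class, and two clients merge by count-weighted averaging of their per-class prototypes, so merging requires only one vector and one count per class.

```python
# Hypothetical sketch of per-class online prototype averaging and
# count-weighted merging, as one way to realize the approach described above.
from collections import defaultdict


class PrototypeStore:
    """Per-class running mean of embedding vectors plus an example count."""

    def __init__(self, dim):
        self.dim = dim
        self.counts = defaultdict(int)   # class label -> number of examples seen
        self.protos = {}                 # class label -> mean embedding

    def update(self, label, embedding):
        """Online mean update: mu <- mu + (x - mu) / n."""
        n = self.counts[label] + 1
        if label not in self.protos:
            self.protos[label] = list(embedding)
        else:
            mu = self.protos[label]
            for i in range(self.dim):
                mu[i] += (embedding[i] - mu[i]) / n
        self.counts[label] = n

    def merge(self, other):
        """Merge another client's store by count-weighted averaging per class."""
        for label, proto in other.protos.items():
            n_o = other.counts[label]
            if label not in self.protos:
                self.protos[label] = list(proto)
                self.counts[label] = n_o
            else:
                n_s = self.counts[label]
                total = n_s + n_o
                mu = self.protos[label]
                for i in range(self.dim):
                    mu[i] = (n_s * mu[i] + n_o * proto[i]) / total
                self.counts[label] = total

    def classify(self, embedding):
        """Nearest-prototype prediction under squared Euclidean distance."""
        return min(
            self.protos,
            key=lambda c: sum((e - p) ** 2 for e, p in zip(embedding, self.protos[c])),
        )
```

Because the merge is a closed-form weighted average, it involves no gradient-based learning or hyperparameter tuning, matching the communication and update costs claimed in the abstract: one vector and one integer per class.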