In artificial intelligence (AI), knowledge is the information required by an intelligent system to accomplish tasks. While traditional knowledge bases use discrete, symbolic representations, detecting knowledge encoded in the continuous representations learned from data has recently received increasing attention. In this work, we propose a method for building a continuous knowledge base (CKB) that can store knowledge imported from multiple, diverse neural networks. The key idea of our approach is to define an interface for each neural network and to cast knowledge transfer as a function simulation problem. Experiments on text classification show promising results: the CKB imports knowledge from a single model and then exports the knowledge to a new model, achieving performance comparable to that of the original model. More interestingly, we import the knowledge from multiple models into the knowledge base, from which the fused knowledge is exported back to a single model, achieving higher accuracy than the original model. With the CKB, it is also straightforward to perform knowledge distillation and transfer learning. Our work opens the door to building a universal continuous knowledge base to collect, store, and organize all continuous knowledge encoded in various neural networks trained for different AI tasks.
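To make the function-simulation framing concrete, the following is a minimal PyTorch sketch of the import/export idea, not the paper's actual architecture: the class names (`Interface`, `ContinuousKB`), the linear interface layers, the MLP memory, and the MSE simulation loss are all hypothetical stand-ins chosen for illustration.

```python
# Minimal sketch of knowledge import/export as function simulation.
# All names, dimensions, and losses below are illustrative assumptions,
# not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Interface(nn.Module):
    """Maps a model's input/output spaces to and from the shared CKB space."""
    def __init__(self, in_dim, out_dim, ckb_dim):
        super().__init__()
        self.encode = nn.Linear(in_dim, ckb_dim)   # model inputs -> CKB space
        self.decode = nn.Linear(ckb_dim, out_dim)  # CKB space -> model outputs

class ContinuousKB(nn.Module):
    """The knowledge base itself is a parametric function on the shared space."""
    def __init__(self, ckb_dim):
        super().__init__()
        self.memory = nn.Sequential(
            nn.Linear(ckb_dim, ckb_dim), nn.ReLU(),
            nn.Linear(ckb_dim, ckb_dim))

    def forward(self, x):
        return self.memory(x)

def import_knowledge(model, iface, ckb, sample_batch, steps=1000):
    """Import: train the CKB (through the model's interface) to simulate
    the model's input-output function."""
    opt = torch.optim.Adam(list(ckb.parameters()) + list(iface.parameters()))
    for _ in range(steps):
        x = sample_batch()
        with torch.no_grad():
            target = model(x)                       # the function to simulate
        pred = iface.decode(ckb(iface.encode(x)))   # the CKB's imitation of it
        loss = F.mse_loss(pred, target)
        opt.zero_grad(); loss.backward(); opt.step()

def export_knowledge(ckb, iface, new_model, sample_batch, steps=1000):
    """Export: train a fresh model to simulate the frozen CKB's function."""
    opt = torch.optim.Adam(new_model.parameters())
    for _ in range(steps):
        x = sample_batch()
        with torch.no_grad():
            target = iface.decode(ckb(iface.encode(x)))
        loss = F.mse_loss(new_model(x), target)
        opt.zero_grad(); loss.backward(); opt.step()

# Toy usage: import a stand-in classifier, then export to a fresh model.
teacher = nn.Linear(32, 10)
iface, ckb = Interface(32, 10, ckb_dim=64), ContinuousKB(ckb_dim=64)
batch = lambda: torch.randn(16, 32)
import_knowledge(teacher, iface, ckb, batch, steps=200)
export_knowledge(ckb, iface, nn.Linear(32, 10), batch, steps=200)
```

Under this reading, importing from multiple models amounts to repeating `import_knowledge` with a separate interface per model against the same shared `ckb`; exporting then reads out the fused function, which is where the accuracy gain reported above would come from.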