We consider learning and compositionality to be the key mechanisms for simulating human-like intelligence. While each mechanism is successfully achieved by neural networks and symbolic AI, respectively, it is the combination of the two that makes human-like intelligence possible. Despite numerous attempts at building hybrid neural-symbolic systems, we argue that the true goal should be unifying learning and compositionality, the core mechanisms, rather than neural and symbolic methods, the surface approaches used to achieve them. In this work, we review and analyze the strengths and weaknesses of neural and symbolic methods by separating their forms and meanings (structures and semantics), and propose Connectionist Probabilistic Programs (CPPs), a framework that connects connectionist structures (for learning) with probabilistic program semantics (for compositionality). Under this framework, we design a CPP extension for small-scale sequence modeling and provide a learning algorithm based on Bayesian inference. Although challenges remain in learning complex patterns without supervision, our early results demonstrate that CPPs successfully extract concepts and relations from raw sequential data, an initial step towards compositional learning.