Ten years into the revival of deep networks and artificial intelligence, we propose a theoretical framework that sheds light on deep networks within the bigger picture of Intelligence in general. We introduce two fundamental principles, Parsimony and Self-consistency, which address two basic questions regarding Intelligence: what to learn and how to learn, respectively. We believe these two principles are the cornerstones for the emergence of Intelligence, artificial or natural. While both principles have rich classical roots, we argue that they can be stated anew in entirely measurable and computable ways. More specifically, the two principles lead to an effective and efficient computational framework, compressive closed-loop transcription, which unifies and explains the evolution of modern deep networks and many artificial intelligence practices. While we mainly use the modeling of visual data as an example, we believe the two principles will unify the understanding of broad families of autonomous intelligent systems and provide a framework for understanding the brain.