Ten years into the revival of deep networks and artificial intelligence, we propose a theoretical framework that helps us understand deep networks within the bigger picture of Intelligence in general. We introduce two fundamental principles, Parsimony and Self-consistency, which we believe to be cornerstones for the emergence of Intelligence, artificial or natural. While these two principles have rich classical roots, we argue that they can be stated anew in entirely measurable and computable ways. More specifically, the two principles lead to an effective and efficient computational framework, compressive closed-loop transcription, that unifies and explains the evolution of modern deep networks and many artificial intelligence practices. Although we mainly use the modeling of visual data as an example, we believe the two principles will unify the understanding of broad families of autonomous intelligent systems and provide a framework for understanding the brain.
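To make the claim that these principles are "measurable and computable" concrete, the framework quantifies Parsimony with a coding-rate (rate-reduction) measure on learned features. Below is a minimal Python sketch of that measure, not the paper's implementation; the function names, the choice of ε, and the toy data are our own illustrative assumptions.

```python
# Minimal sketch of the coding-rate / rate-reduction measure of parsimony.
# R(Z) estimates the bits needed to encode features Z up to precision eps;
# Delta R rewards features whose classes are each compact but jointly diverse.
import numpy as np

def coding_rate(Z: np.ndarray, eps: float = 0.5) -> float:
    """R(Z) = 1/2 * logdet(I + d/(n*eps^2) * Z Z^T), Z of shape (d, n)."""
    d, n = Z.shape
    return 0.5 * np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * Z @ Z.T)[1]

def rate_reduction(Z: np.ndarray, labels: np.ndarray, eps: float = 0.5) -> float:
    """Delta R = R(Z) - sum_j (n_j / n) * R(Z_j): the rate of the whole
    ensemble minus the sample-weighted rate of each class."""
    n = Z.shape[1]
    r_parts = sum(
        (np.sum(labels == j) / n) * coding_rate(Z[:, labels == j], eps)
        for j in np.unique(labels)
    )
    return coding_rate(Z, eps) - r_parts

# Toy check: two classes lying on orthogonal 1-D subspaces of R^4 should
# score a higher rate reduction than unstructured Gaussian features.
rng = np.random.default_rng(0)
Z_sep = np.hstack([
    np.outer([1, 0, 0, 0], rng.standard_normal(50)),
    np.outer([0, 1, 0, 0], rng.standard_normal(50)),
])
Z_mix = rng.standard_normal((4, 100))
y = np.array([0] * 50 + [1] * 50)
print(f"Delta R, orthogonal subspaces: {rate_reduction(Z_sep, y):.3f}")
print(f"Delta R, unstructured features: {rate_reduction(Z_mix, y):.3f}")
```

In the compressive closed-loop transcription framework, a measure of this kind serves as the objective that an encoder and decoder play against each other to optimize; the sketch above only illustrates how parsimony itself becomes a computable quantity.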