Strong AI requires the learning engine to be task-nonspecific and to automatically construct a dynamic hierarchy of internal features. By hierarchy we mean, for example, that short road edges and short bush edges amount to intermediate features of landmarks, whereas intermediate features from tree shadows are distractors that the high-level landmark concept must disregard. By dynamic we mean that the automatic selection of features, and the disregarding of distractors, is not static but based on dynamic statistics (e.g., because shadows are unstable in the context of landmarks). By internal features we mean features that are not only sensory but also motor, so that motor (state) context integrates with sensory inputs to form a context-based logic machine. We explain why strong AI is necessary for any practical AI system that must work reliably in the real world. We then present a new generation of Developmental Networks, DN-2. Among its many novelties beyond DN-1, the most important is that the inhibition area of each internal neuron is neuron-specific and dynamic. This enables DN-2 to automatically construct an internal hierarchy that is fluid, whose number of areas is not static as in DN-1. To make optimal use of the limited resources available, we establish that DN-2 is optimal in the sense of maximum likelihood, under the conditions of limited learning experience and limited resources. We also show how DN-2 can learn an emergent Universal Turing Machine (UTM); combined with the optimality result, this yields an optimal UTM. DN-2 was used in experiments on real-world vision-based navigation, maze planning, and audition. These experiments show that DN-2 is general-purpose, handling both natural and synthetic inputs. Its automatically constructed internal representations focus on important features while remaining invariant to distractors and other irrelevant context concepts.
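To make the key novelty concrete, the following is a minimal illustrative sketch, not the authors' DN-2 algorithm, of what "neuron-specific and dynamic inhibition" could look like: each neuron competes only within its own inhibition set, and that set is re-estimated from recent response statistics, so the grouping of competing neurons (and hence the emergent hierarchy) is fluid rather than fixed. All names, the top-k firing rule, and the correlation-based update are assumptions made for illustration only.

```python
# Hypothetical sketch of neuron-specific, dynamic inhibition (assumption,
# not the published DN-2 update rule).
import numpy as np

rng = np.random.default_rng(0)
n_neurons, top_k = 12, 3

# Pre-responses (e.g., matches between weights and bottom-up/top-down inputs).
z = rng.random(n_neurons)

# Each neuron's own inhibition set: initially its spatial neighbours (assumed layout).
inhibition = {i: {j for j in range(max(0, i - 2), min(n_neurons, i + 3))}
              for i in range(n_neurons)}

def fire(z, inhibition, top_k):
    """A neuron fires iff its pre-response ranks in the top_k inside its own set."""
    y = np.zeros_like(z)
    for i, group in inhibition.items():
        ranked = sorted(group, key=lambda j: z[j], reverse=True)
        if i in ranked[:top_k]:
            y[i] = z[i]
    return y

def update_inhibition(history, threshold=0.5):
    """Re-estimate each neuron's competitors from response correlations (assumption)."""
    corr = np.corrcoef(history.T)
    return {i: set(np.flatnonzero(corr[i] > threshold)) for i in range(history.shape[1])}

y = fire(z, inhibition, top_k)
history = rng.random((50, n_neurons))      # stand-in for recent firing statistics
inhibition = update_inhibition(history)    # the inhibition areas drift with the statistics
```

Because each neuron's inhibition set can differ from its neighbours' and changes with experience, no fixed number of layers is imposed in advance, which is the sense in which the internal hierarchy is described as fluid.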