Recent advances in machine learning have made it possible to train artificially intelligent agents that perform with super-human accuracy on a great diversity of complex tasks. However, training these capabilities often requires millions of annotated examples -- far more than humans typically need to achieve a passing level of mastery on similar tasks. Thus, while contemporary machine learning methods can produce agents that exhibit super-human performance, their rate of learning per opportunity in many domains is decidedly slower than that of human learners. In this work we formalize a theory of Decomposed Inductive Procedure Learning (DIPL) that outlines how different forms of inductive symbolic learning can be used in combination to build agents that learn educationally relevant tasks, such as mathematical and scientific procedures, at a rate similar to human learners. We motivate the construction of this theory along Marr's concepts of the computational, algorithmic, and implementation levels of cognitive modeling, and outline at the computational level six learning capacities that must be achieved to accurately model human learning. We demonstrate that agents built along the DIPL theory are amenable to satisfying these capacities, and show, both empirically and theoretically, that DIPL enables the creation of agents that exhibit human-like learning performance.