An artificial general intelligence (AGI), by one definition, is an agent that requires less information than any other to make an accurate prediction. It is arguable that the general reinforcement learning agent AIXI not only meets this definition, but is the only mathematical formalism to do so. Though a significant result, AIXI is incomputable and claims regarding its performance are subjective. This paper proposes an alternative formalism of AGI that overcomes both problems. A formal proof of its performance is given, along with a simple implementation and experimental results supporting these claims.