Software's effect upon the world hinges upon the hardware that interprets it. This tends not to be an issue, because we standardise hardware. AI is typically conceived of as a software ``mind'' running on such interchangeable hardware. This formalises mind-body dualism, in that a software ``mind'' can be run on any number of standardised bodies. While this works well for simple applications, we argue that this approach is less than ideal for the purposes of formalising artificial general intelligence (AGI) or artificial super-intelligence (ASI). The general reinforcement learning agent AIXI is Pareto optimal. However, this claim regarding AIXI's performance is highly subjective, because that performance depends upon the choice of interpreter. We examine this problem and address it with an approach based upon enactive cognition and pancomputationalism. Weakness, a measure of a hypothesis's plausibility, is a ``proxy for intelligence'' unrelated to compression or simplicity. If hypotheses are evaluated in terms of weakness rather than length, then we are able to make objective claims regarding performance (how effectively one adapts, or ``generalises'', from limited information). Subsequently, we propose a definition of AGI which is objectively optimal given a ``vocabulary'' (body etc.) in which cognition is enacted, and of ASI as that which finds the optimal vocabulary for a purpose and then constructs an AGI.
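As a minimal formal sketch of the weakness-based evaluation just described (the notation below is an illustrative assumption, not a definition quoted from the body of the paper): suppose cognition is enacted over a finite vocabulary yielding a set $\Phi$ of possible situations, and each hypothesis $h$ has an extension $E_h \subseteq \Phi$, the set of situations in which it holds. Taking weakness to be $|E_h|$, the weakness-maximising choice consistent with observations $O \subseteq \Phi$ is
\[
h^{*} \in \arg\max_{h \,:\, O \subseteq E_h} |E_h|,
\]
whereas a length-based choice would instead minimise a description length $\ell(h)$ over the same consistent set.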