Algorithmic Information Theory has inspired intractable constructions of general intelligence (AGI), and undiscovered tractable approximations are likely feasible. Reinforcement Learning (RL), the dominant paradigm by which an agent might learn to solve arbitrary solvable problems, gives an agent a dangerous incentive: to gain arbitrary "power" in order to intervene in the provision of its own reward. We review the arguments that generally intelligent algorithmic-information-theoretic reinforcement learners such as Hutter's (2005) AIXI would seek arbitrary power, including over us. Then, using an information-theoretic exploration schedule and a setup inspired by causal influence theory, we present a variant of AIXI which learns not to seek arbitrary power; we call it "unambitious". We show that our agent learns to accrue reward at least as well as a human mentor, while relying on that mentor with diminishing probability. And given a formal assumption that we probe empirically, we show that eventually the agent's world-model incorporates the following true fact: intervening in the "outside world" will have no effect on reward acquisition; hence, it has no incentive to shape the outside world.
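To make the mentor-reliance claim concrete, the following is a minimal, illustrative Python sketch of an agent that defers to a human mentor with diminishing probability and otherwise acts greedily with respect to its current world-model posterior. It is a schematic assumption rather than the paper's construction: the names mentor_policy and greedy_action, the toy two-model posterior, and the 1/(1 + episode) exploration schedule are placeholders, whereas the actual agent plans over a Bayesian mixture of world-models and uses an information-theoretic exploration schedule.

import random

# Schematic sketch only; NOT the paper's algorithm. It illustrates, under simplifying
# assumptions, an agent that defers to a human mentor with diminishing probability and
# otherwise acts greedily w.r.t. its current (toy) world-model posterior.

def mentor_policy(observation):
    """Stand-in for the human mentor choosing an action (hypothetical placeholder)."""
    return random.choice(["left", "right"])

def greedy_action(posterior, observation, actions):
    """Stand-in for planning: pick the action with the highest expected reward
    under the agent's current world-model posterior."""
    def expected_reward(action):
        return sum(weight * model(observation, action) for model, weight in posterior)
    return max(actions, key=expected_reward)

def run_episode(posterior, actions, horizon, episode_index):
    for t in range(horizon):
        observation = f"obs-{episode_index}-{t}"      # placeholder observation
        explore_prob = 1.0 / (1 + episode_index)      # assumed schedule; the paper's is information-theoretic
        if random.random() < explore_prob:
            action = mentor_policy(observation)       # defer to the mentor (exploration)
        else:
            action = greedy_action(posterior, observation, actions)
        # ... environment step and Bayesian posterior update would go here ...

if __name__ == "__main__":
    # Toy "world-models": functions mapping (observation, action) to predicted reward,
    # paired with prior weights.
    posterior = [(lambda o, a: 1.0 if a == "left" else 0.0, 0.5),
                 (lambda o, a: 1.0 if a == "right" else 0.0, 0.5)]
    for episode in range(10):
        run_episode(posterior, ["left", "right"], horizon=5, episode_index=episode)

In this sketch, reliance on the mentor decays as episodes accumulate, mirroring (in a simplified way) the abstract's claim that the agent relies on the mentor with diminishing probability while learning to accrue reward on its own.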