The increasing capabilities of artificial intelligence (AI) systems make it ever more important that we interpret their internals to ensure that their intentions are aligned with human values. Yet there is reason to believe that misaligned artificial intelligence will have a convergent instrumental incentive to make its thoughts difficult for us to interpret. In this article, I discuss many ways that a capable AI might circumvent scalable interpretability methods and suggest a framework for thinking about these potential future risks.