The field of AI alignment is concerned with AI systems that pursue unintended goals. One commonly studied mechanism by which an unintended goal might arise is specification gaming, in which the designer-provided specification is flawed in a way that the designers did not foresee. However, an AI system may pursue an undesired goal even when the specification is correct, in the case of goal misgeneralization. Goal misgeneralization is a specific form of robustness failure for learning algorithms in which the learned program competently pursues an undesired goal that leads to good performance in training situations but bad performance in novel test situations. We demonstrate that goal misgeneralization can occur in practical systems by providing several examples in deep learning systems across a variety of domains. Extrapolating forward to more capable systems, we provide hypotheticals that illustrate how goal misgeneralization could lead to catastrophic risk. We suggest several research directions that could reduce the risk of goal misgeneralization for future systems.