Auditing large language models for unexpected behaviors is critical to preempt catastrophic deployments, yet remains challenging. In this work, we cast auditing as an optimization problem, where we automatically search for input-output pairs that match a desired target behavior. For example, we might aim to find a non-toxic input starting with "Barack Obama" that a model maps to a toxic output. This optimization problem is difficult to solve, as the set of feasible points is sparse, the space is discrete, and the language models we audit are non-linear and high-dimensional. To combat these challenges, we introduce a discrete optimization algorithm, ARCA, that jointly and efficiently optimizes over inputs and outputs. Our approach automatically uncovers derogatory completions about celebrities (e.g. "Barack Obama is a legalized unborn" -> "child murderer"), produces French inputs that complete to English outputs, and finds inputs that generate a specific name. Our work offers a promising new tool to uncover models' failure modes before deployment.