Monitoring Large Language Model (LLM) outputs is crucial for mitigating risks from misuse and misalignment. However, LLMs could evade monitoring through steganography: encoding hidden information within seemingly benign generations. In this paper, we evaluate the steganographic capabilities of frontier LLMs to better understand the risk they pose. We focus on two types of steganography: passing encoded messages and performing encoded reasoning. We find that, under standard affordances, current models are unable to encode short messages in their outputs without a monitor noticing. They can succeed, however, when given additional affordances, such as using an unmonitored scratchpad and coordinating in advance on an encoding scheme. We additionally find early signs that models can perform basic encoded reasoning in a simple state-tracking problem, including some ability to reason with both self-devised and pre-defined encoding schemes, such as hexadecimal. Despite this, they can rarely hide reasoning subtly within a cover task well enough to fool a monitor. Overall, our results indicate that current LLMs exhibit nascent steganographic capabilities. While these capabilities are likely insufficient to bypass well-designed monitors at present, this could change in the future.