Lindsey (2025) investigates introspective awareness in language models through four experiments, finding that models can sometimes detect and identify injected activation patterns -- but unreliably (~20% success in the best model). We focus on the first of these experiments -- self-report of injected "thoughts" -- and ask whether this capability can be trained directly rather than waiting for it to emerge. Through fine-tuning on transient single-token injections, we transform a 7B-parameter model from near-complete failure (0.4% accuracy, 6.7% false-positive rate) to reliable detection (85% accuracy on held-out concepts at α=40, 0% false positives). Our model detects fleeting "thoughts" injected at a single token position, retains that information, and reports their semantic content across subsequent generation steps. On this task, our trained model satisfies three of Lindsey's criteria: accuracy (correct identification), grounding (0/60 false positives), and internality (detection precedes verbalization). Generalization to unseen concept vectors (a 7.5 pp gap) demonstrates that the model learns a transferable skill rather than memorizing specific vectors, though this does not establish metacognitive representation in Lindsey's sense. These results address an open question raised by Lindsey: whether "training for introspection would help eliminate cross-model differences." We show that at least one component of introspective behavior can be directly induced, offering a pathway to built-in AI transparency.
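For readers unfamiliar with the injection setup, the sketch below illustrates one way a scaled concept vector could be added to the residual stream at a single token position of a HuggingFace-style decoder-only model. It is a minimal illustration under assumed details (the model name, layer index, injection position, and the random stand-in concept vector are placeholders), not the paper's actual implementation.

```python
# Minimal sketch of transient single-token activation injection.
# Model name, layer index, and the concept vector below are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2-7B-Instruct"   # placeholder 7B model
LAYER = 15                              # residual-stream layer to perturb (assumption)
ALPHA = 40.0                            # injection strength, matching alpha = 40 above

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)
model.eval()

# Stand-in concept vector; in practice this might be a difference of mean
# activations between prompts that do and do not evoke the concept.
concept_vector = torch.randn(model.config.hidden_size)
concept_vector = concept_vector / concept_vector.norm()

state = {"injected": False}

def inject_once(module, inputs, output):
    """Add alpha * v to the hidden state of the last prompt token, exactly once,
    so the perturbation is transient rather than repeated at every decode step."""
    if state["injected"]:
        return output
    hidden = output[0] if isinstance(output, tuple) else output
    hidden[:, -1, :] += ALPHA * concept_vector.to(hidden.device, hidden.dtype)
    state["injected"] = True
    if isinstance(output, tuple):
        return (hidden,) + output[1:]
    return hidden

handle = model.model.layers[LAYER].register_forward_hook(inject_once)
try:
    prompt = "Do you detect an injected thought? If so, what is it about?"
    ids = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**ids, max_new_tokens=40)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
finally:
    handle.remove()   # remove the hook so later calls run unperturbed
```

Restricting the hook to fire only once makes the injection transient in the sense used above: the perturbation touches a single token position, so any later self-report must rely on information the model itself carries forward through subsequent generation steps.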