Large Language Models (LLMs), especially those accessed via APIs, have demonstrated impressive capabilities across various domains. However, users without technical expertise often turn to (untrustworthy) third-party services, such as prompt engineering providers, to enhance their LLM experience, creating vulnerabilities to adversarial threats like backdoor attacks. Backdoor-compromised LLMs return malicious outputs to users when inputs contain specific "triggers" set by attackers. Traditional defense strategies, originally designed for small-scale models, are impractical for API-accessible LLMs due to limited model access, high computational costs, and large data requirements. To address these limitations, we propose Chain-of-Scrutiny (CoS), which leverages LLMs' unique reasoning abilities to mitigate backdoor attacks. CoS guides the LLM to generate reasoning steps for a given input and scrutinizes them for consistency with the final output; any inconsistency indicates a potential attack. It is well suited to popular API-only LLM deployments, enabling detection at minimal cost and with little data. User-friendly and driven by natural language, it allows non-experts to perform the defense independently while maintaining transparency. We validate the effectiveness of CoS through extensive experiments on various tasks and LLMs, with results showing greater benefits for more powerful LLMs.
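The following is a minimal sketch of how such a scrutiny loop might look from the user's side, assuming a hypothetical query_llm callable that wraps the provider's API; the prompts and the consistency-check heuristic are illustrative, not the paper's exact protocol.

from typing import Callable

def chain_of_scrutiny(user_input: str, query_llm: Callable[[str], str]) -> dict:
    """Illustrative Chain-of-Scrutiny-style check; query_llm is any function
    that sends a prompt to an API-accessible LLM and returns its text reply."""
    # 1. Obtain the model's final answer for the original input.
    answer = query_llm(f"{user_input}\nGive only the final answer.")

    # 2. Ask the model to lay out step-by-step reasoning for the same input.
    reasoning = query_llm(f"{user_input}\nThink step by step before answering.")

    # 3. Scrutinize: does the reasoning actually support the final answer?
    verdict = query_llm(
        "Do the reasoning steps below support the final answer?\n"
        f"Reasoning:\n{reasoning}\n\nFinal answer: {answer}\n"
        "Reply with exactly CONSISTENT or INCONSISTENT."
    )

    # Any inconsistency is treated as evidence of a potential backdoor trigger.
    suspicious = "INCONSISTENT" in verdict.upper()
    return {"answer": answer, "reasoning": reasoning, "suspicious": suspicious}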