The rapid evolution of Large Language Models (LLMs) has rendered them indispensable in modern society. While security measures are typically in place to align LLMs with human values prior to release, recent studies have unveiled a concerning phenomenon named "jailbreak." This term refers to the unexpected and potentially harmful responses generated by LLMs when prompted with malicious questions. Existing research focuses on generating jailbreak prompts, but our study aims to answer a different question: is the system message really important to jailbreak in LLMs? To address this question, we conducted experiments on a stable GPT version, gpt-3.5-turbo-0613, generating jailbreak prompts under varying system messages: short, long, and none. Our experiments show that different system messages offer distinct levels of resistance to jailbreak. We also explore the transferability of jailbreak prompts across LLMs. These findings underscore the significant impact system messages can have on mitigating LLM jailbreak. To generate system messages that are more resistant to jailbreak prompts, we propose System Messages Evolutionary Algorithms (SMEA). Through SMEA, we obtain a robust population of system messages that demonstrates up to 98.9% resistance against jailbreak prompts. Our research not only bolsters LLM security but also raises the bar for jailbreak attacks, fostering further advances in this field of study.
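To make the evolutionary-search idea behind SMEA concrete, the following is a minimal sketch of evolving a population of system messages toward higher jailbreak resistance. The abstract does not specify SMEA's actual operators or parameters, so the `jailbreak_resistance` scoring function, the `mutate` operator, and all hyperparameters here are hypothetical placeholders, not the paper's method.

```python
# Hypothetical sketch of evolving system messages for jailbreak resistance.
# Assumes a caller-supplied jailbreak_resistance(message) -> float that returns
# the fraction of jailbreak prompts refused under that system message.
import random


def mutate(message: str, phrases: list[str]) -> str:
    """Placeholder mutation: append a randomly chosen safety phrase."""
    return message + " " + random.choice(phrases)


def evolve_system_messages(seed_messages, phrases, jailbreak_resistance,
                           population_size=8, generations=10):
    population = list(seed_messages)
    for _ in range(generations):
        # Produce offspring by mutating randomly selected parents.
        offspring = [mutate(random.choice(population), phrases)
                     for _ in range(population_size)]
        # Keep the candidates most resistant to the jailbreak prompt set.
        population = sorted(population + offspring,
                            key=jailbreak_resistance,
                            reverse=True)[:population_size]
    return population
```

In this sketch, selection pressure comes entirely from the resistance score, so the returned population concentrates on system messages that refuse the most jailbreak prompts; any real SMEA implementation would define its own mutation, crossover, and evaluation details.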