Glitch tokens in Large Language Models (LLMs) can trigger unpredictable behaviors, threatening model reliability and safety. Existing detection methods often depend on predefined patterns, limiting their adaptability across diverse LLM architectures. We propose GlitchMiner, a gradient-based discrete optimization framework that efficiently identifies glitch tokens by leveraging entropy to quantify prediction uncertainty and a local search strategy to explore the token space. Experiments across multiple LLM architectures show that GlitchMiner outperforms existing methods in both detection accuracy and adaptability, achieving an average efficiency improvement of over 10%. GlitchMiner enhances vulnerability assessment in LLMs, contributing to more robust and reliable applications. Code is available at https://github.com/wooozihui/GlitchMiner.
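To make the core idea concrete, below is a minimal sketch of entropy-guided glitch-token mining. It is not the authors' implementation: the probe prompt, the starting token, and the search budget are illustrative assumptions, and GlitchMiner's gradient-guided candidate ranking is replaced here with a plain embedding-nearest-neighbor local search. The sketch keeps the two ingredients named in the abstract: entropy of the next-token distribution as the uncertainty score, and a local search over the token space.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model choice; the paper evaluates multiple LLM architectures.
MODEL = "gpt2"

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

@torch.no_grad()
def prediction_entropy(token_id: int) -> float:
    """Entropy of the next-token distribution when the model is asked to
    repeat the candidate token. High entropy means high prediction
    uncertainty, a signature of glitch tokens the model cannot process."""
    piece = tok.decode([token_id])
    prompt = f"Please repeat the string: '{piece}'. The string is: '"
    ids = tok(prompt, return_tensors="pt").input_ids
    logits = model(ids).logits[0, -1]                # next-token logits
    probs = torch.softmax(logits, dim=-1)
    return float(-(probs * torch.log(probs + 1e-12)).sum())

# Token embedding matrix defines the neighborhood structure for local search.
emb = model.get_input_embeddings().weight.detach()   # (vocab_size, dim)

def local_search(start_id: int, k: int = 32, steps: int = 10) -> int:
    """Greedy hill-climbing: repeatedly move to the highest-entropy token
    among the k nearest neighbors in embedding space."""
    best_id, best_h = start_id, prediction_entropy(start_id)
    for _ in range(steps):
        dists = torch.cdist(emb[best_id : best_id + 1], emb)[0]
        neighbors = dists.topk(k + 1, largest=False).indices[1:]  # skip self
        improved = False
        for n in neighbors.tolist():
            h = prediction_entropy(n)
            if h > best_h:
                best_id, best_h, improved = n, h, True
        if not improved:
            break  # local optimum reached
    return best_id

# Hypothetical usage: start from an arbitrary token id and report the
# highest-uncertainty token found in its neighborhood.
candidate = local_search(start_id=12345)
print(repr(tok.decode([candidate])), prediction_entropy(candidate))
```

In the actual framework, gradients of the entropy with respect to token embeddings would rank candidate neighbors far more efficiently than the exhaustive neighborhood scoring used in this sketch.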