This paper introduces a framework for assessing the reliability of Large Language Model (LLM) text annotations in social science research by adapting established survey methodology principles. Drawing parallels between survey respondent behavior and LLM outputs, the study implements three key interventions: option randomization, position randomization, and reverse validation. While traditional accuracy metrics may mask model instabilities, particularly in edge cases, the proposed framework provides a more comprehensive reliability assessment. Using the F1000 dataset from biomedical science and three sizes of Llama models (8B, 70B, and 405B parameters), the paper demonstrates that these survey-inspired interventions can effectively identify unreliable annotations that would otherwise go undetected by accuracy metrics alone. The results show that 5-25% of LLM annotations change under these interventions, with larger models exhibiting greater stability. Notably, for rare categories, approximately 50% of "correct" annotations demonstrate low reliability when subjected to this framework. The paper introduces an information-theoretic reliability score (R-score), based on Kullback-Leibler divergence, that quantifies annotation confidence and distinguishes between random guessing and meaningful annotations at the case level. This approach complements existing expert validation methods by providing a scalable way to assess internal annotation reliability and offers practical guidance for prompt design and downstream analysis.
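The abstract does not spell out the exact form of the R-score, so the following is only a minimal illustrative sketch, assuming the score is computed per item as the Kullback-Leibler divergence between the empirical distribution of labels the model produces across repeated, perturbed prompts (option randomization, position randomization, reverse validation) and a uniform baseline representing random guessing. The function name `r_score` and the label-count input are hypothetical conveniences, not the paper's interface.

```python
import numpy as np

def r_score(label_counts, eps=1e-9):
    """Sketch of an R-score for one annotated item (assumed formulation).

    label_counts: how often each candidate label was chosen across the
    perturbed repetitions of the same item. The score is KL(p || q), where
    p is the empirical label distribution and q is uniform (random guessing).
    Values near 0 suggest guessing; larger values suggest a stable annotation.
    """
    counts = np.asarray(label_counts, dtype=float)
    p = counts / counts.sum()              # empirical label distribution
    q = np.full_like(p, 1.0 / len(p))      # uniform random-guessing baseline
    # KL(p || q) = sum_k p_k * log(p_k / q_k); eps guards against log(0)
    return float(np.sum(p * np.log((p + eps) / q)))

# Example: 10 perturbed runs on one item with 4 candidate labels
print(r_score([9, 1, 0, 0]))  # ~1.06: consistent, confident annotation
print(r_score([3, 3, 2, 2]))  # ~0.02: close to random guessing
```

Under this reading, case-level reliability falls out directly: items whose R-score stays near zero can be flagged for expert review or excluded from downstream analysis, which matches the abstract's claim that the framework complements rather than replaces expert validation.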