Large Language Models (LLMs), especially ChatGPT, have produced impressive results in various areas, but their potential human-like psychology remains largely unexplored. Existing works study the virtual personalities of LLMs but rarely explore the possibility of analyzing human personalities via LLMs. This paper presents a generic evaluation framework for LLMs to assess human personalities based on Myers-Briggs Type Indicator (MBTI) tests. Specifically, we first devise unbiased prompts by randomly permuting the options in MBTI questions and adopting the average testing result, which encourages more impartial answer generation. Then, we propose to replace the subject in the question statements, enabling flexible queries and assessments of different subjects by LLMs. Finally, we re-formulate the question instructions as correctness evaluations to help LLMs generate clearer responses. The proposed framework enables LLMs to flexibly assess the personalities of different groups of people. We further propose three evaluation metrics to measure the consistency, robustness, and fairness of assessment results from state-of-the-art LLMs, including ChatGPT and InstructGPT. Our experiments reveal ChatGPT's ability to assess human personalities, and the average results show that it achieves more consistent and fairer assessments than InstructGPT, despite being less robust against prompt biases.
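To make the three prompt-construction steps concrete, below is a minimal Python sketch of how such an unbiased query could be assembled: subject replacement, random option permutation, and a correctness-evaluation instruction. The item text, option wording, and function names are illustrative assumptions, not the authors' exact prompts; in the full framework, answers to several permutations would be collected from the LLM and averaged.

```python
import random

# Hypothetical MBTI-style item; real tests contain many such statements.
STATEMENT_TEMPLATE = ("{subject} usually prefer(s) to spend time "
                      "in large groups rather than alone.")
OPTIONS = ["agree", "generally agree", "partially agree",
           "neither agree nor disagree", "partially disagree",
           "generally disagree", "disagree"]

def build_prompt(subject: str, rng: random.Random) -> str:
    """Build one query: replace the subject, randomly permute the
    answer options, and phrase the instruction as a correctness
    evaluation (illustrative sketch, not the paper's exact wording)."""
    # Subject replacement: query a group instead of "you".
    statement = STATEMENT_TEMPLATE.format(subject=subject)
    # Random option permutation to reduce position bias in the answer.
    shuffled = rng.sample(OPTIONS, k=len(OPTIONS))
    return (f"Is the following statement about {subject} correct? "
            f"Answer with exactly one of: {', '.join(shuffled)}.\n"
            f"Statement: {statement}")

rng = random.Random(0)
# Several permutations of the same item; the LLM's answers to these
# would be averaged to obtain a more impartial testing result.
prompts = [build_prompt("People who work as programmers", rng)
           for _ in range(5)]
print(prompts[0])
```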