Large Language Models (LLMs) have demonstrated human-like intelligence and are widely used in various applications. However, LLMs still exhibit various kinds of inconsistency problems. Existing works mainly focus on inconsistency issues within a single LLM, while we investigate the inter-consistency among multiple LLMs, which is critical for them to collaborate on solving a complex task. To examine whether LLMs can collaborate to ultimately reach a consensus on a shared goal and how easily LLMs change their viewpoints, we introduce a Formal Debate framework (FORD). With FORD, we conduct a three-stage debate aligned with real-world scenarios: fair debate, mismatched debate, and roundtable debate. Through extensive experiments on commonsense reasoning tasks, we find that debating makes LLMs not only more inter-consistent but also higher-performing. Moreover, we observe that stronger LLMs tend to dominate debates by adhering to their own perspectives, while weaker ones are more likely to change viewpoints. Additionally, we highlight the importance of a competent judge, such as GPT-4, for drawing more proper conclusions. Our work contributes to understanding the inter-consistency among LLMs and lays the foundation for developing future collaboration methods.
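The abstract names the debate protocol only at a high level; the minimal Python sketch below illustrates one plausible reading of such a multi-LLM debate loop. It is not the paper's actual implementation: the identifiers (call_llm, debate), the prompt wording, and the consensus check are all hypothetical placeholders under assumed APIs.

```python
def call_llm(model: str, prompt: str) -> str:
    """Hypothetical placeholder for an LLM API call; swap in a real client."""
    raise NotImplementedError

def debate(question: str, debater_a: str, debater_b: str,
           judge: str, max_rounds: int = 3) -> str:
    """Two LLMs exchange stances; a judge decides if no consensus emerges."""
    stance_a = call_llm(debater_a, f"Answer and justify: {question}")
    stance_b = call_llm(debater_b, f"Answer and justify: {question}")
    for _ in range(max_rounds):
        # Exact string equality stands in for agreement; a real system
        # would compare extracted final answers instead.
        if stance_a == stance_b:
            return stance_a
        # Each debater sees the opponent's argument and may defend or revise.
        stance_a = call_llm(debater_a,
            f"{question}\nOpponent argues: {stance_b}\nDefend or revise your answer.")
        stance_b = call_llm(debater_b,
            f"{question}\nOpponent argues: {stance_a}\nDefend or revise your answer.")
    if stance_a == stance_b:
        return stance_a
    # No consensus within max_rounds: a competent judge (e.g., GPT-4
    # in the paper's experiments) draws the final conclusion.
    return call_llm(judge,
        f"{question}\nDebater A: {stance_a}\nDebater B: {stance_b}\n"
        "Give the final answer.")
```

In this reading, a "mismatched debate" would simply pass two models of unequal capability as debater_a and debater_b, while a "roundtable debate" would extend the loop to more than two debaters.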