With the wide application of Large Language Models (LLMs) such as ChatGPT, making the content generated by LLMs accurate and credible has become very important, especially in complex knowledge-intensive tasks. In this paper, we propose a novel framework called Search-in-the-Chain (SearChain) to improve the accuracy, credibility, and traceability of LLM-generated content for multi-hop question answering, a typical complex knowledge-intensive task. SearChain is a framework that deeply integrates LLM and information retrieval (IR). In SearChain, the LLM constructs a chain-of-query, which is the decomposition of the multi-hop question. Each node of the chain is a query-answer pair consisting of an IR-oriented query and the answer generated by the LLM for this query. IR verifies, completes, and traces the information of each node of the chain, guiding the LLM to construct a correct chain-of-query and finally answer the multi-hop question. SearChain makes the LLM shift from trying to give an answer directly to trying to construct the chain-of-query when faced with a multi-hop question, which stimulates its knowledge-reasoning ability and provides an interface for IR to be deeply involved in the reasoning process of the LLM. IR interacts with each node of the chain-of-query: it verifies the information of the node and provides unknown knowledge to the LLM, which ensures the accuracy of the whole chain as the LLM generates the answer. Besides, the content returned by the LLM to the user includes not only the final answer but also the reasoning process for the question, that is, the chain-of-query and the supporting documents retrieved by IR for each node of the chain, which improves the credibility and traceability of the generated content. Experimental results show that SearChain outperforms related baselines on four multi-hop question-answering datasets.
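To make the interaction pattern concrete, the following Python sketch outlines one plausible realization of the loop described above: the LLM proposes a chain-of-query, IR verifies each node against retrieved documents, and unsupported nodes are fed back as corrections before the chain is rebuilt. This is a minimal illustration, not the authors' implementation; the `llm` and `retriever` interfaces (`generate_chain`, `retrieve`, `supports`) are hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class Node:
    query: str          # IR-oriented sub-query produced by the LLM
    llm_answer: str     # answer the LLM generates for this sub-query
    evidence: str = ""  # supporting document retrieved by IR


def searchain(question: str, llm, retriever, max_rounds: int = 5) -> dict:
    """Iteratively build a chain-of-query, letting IR verify or complete each node.

    `llm` and `retriever` are assumed interfaces, not part of the paper's code.
    """
    feedback = ""
    chain: list[Node] = []
    for _ in range(max_rounds):
        # 1. The LLM decomposes the question into a chain of query-answer nodes,
        #    conditioned on any correction feedback from IR.
        chain = llm.generate_chain(question, feedback)

        # 2. IR checks every node: retrieve evidence and verify the LLM's answer.
        corrections = []
        for node in chain:
            node.evidence = retriever.retrieve(node.query)
            if not retriever.supports(node.evidence, node.llm_answer):
                corrections.append((node.query, node.evidence))

        # 3. If every node is supported, return the answer together with the
        #    traceable reasoning process (chain-of-query plus supporting documents).
        if not corrections:
            return {
                "answer": chain[-1].llm_answer,
                "chain_of_query": [(n.query, n.llm_answer) for n in chain],
                "supporting_documents": [n.evidence for n in chain],
            }

        # 4. Otherwise, feed the retrieved knowledge back so the LLM can correct
        #    the unsupported nodes and rebuild the chain in the next round.
        feedback = "\n".join(f"Query: {q}\nEvidence: {e}" for q, e in corrections)

    # Fall back to the last chain if the round budget is exhausted.
    return {
        "answer": chain[-1].llm_answer,
        "chain_of_query": [(n.query, n.llm_answer) for n in chain],
        "supporting_documents": [n.evidence for n in chain],
    }
```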