Scaling up data, parameters, and test-time computation has been the mainstream approach to improving LLM systems (LLMsys), but its upper bound is nearly reached due to the gradual depletion of high-quality data and the diminishing returns of additional computational resources. Inspired by the ability of humans and traditional AI systems to learn from practice, constructing memory and continual learning frameworks for LLMsys has become an important and popular research direction in recent literature. Yet, existing benchmarks for LLM memory mostly evaluate systems on homogeneous reading comprehension tasks with long-form inputs rather than testing their ability to learn from accumulated user feedback at serving time. We therefore propose a user feedback simulation framework and a comprehensive benchmark covering multiple domains, languages, and task types to evaluate the continual learning abilities of LLMsys. Experiments show that the effectiveness and efficiency of state-of-the-art baselines are far from satisfactory, and we hope this benchmark can pave the way for future studies on LLM memory and optimization algorithms.