Memory is fundamental to large language model (LLM)-based agents, but existing surveys emphasize application-level uses (e.g., personalized dialogue) while overlooking the atomic operations that govern memory dynamics. This work categorizes memory into parametric (implicit in model weights) and contextual (explicit external data, structured or unstructured) forms, and defines six core operations: Consolidation, Updating, Indexing, Forgetting, Retrieval, and Condensation. Mapping these dimensions reveals four key research topics: long-term memory, long-context memory, parametric memory modification, and multi-source memory. The taxonomy provides a structured view of memory-related research, benchmarks, and tools, clarifies functional interactions in LLM-based agents, and points toward future advances. The datasets, papers, and tools are publicly available at https://github.com/Elvin-Yiming-Du/Survey_Memory_in_AI.
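To make the taxonomy concrete, the sketch below expresses contextual memory and the six core operations as a minimal Python interface. It is illustrative only and not taken from the survey: the `AgentMemory`, `MemoryEntry`, and `ContextualMemory` names, the method signatures, and the keyword-matching retrieval are all hypothetical assumptions.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Any


@dataclass
class MemoryEntry:
    """A single piece of contextual memory (e.g., a dialogue turn or fact)."""
    content: str
    metadata: dict[str, Any] = field(default_factory=dict)


class AgentMemory(ABC):
    """Interface covering the six core operations named in the taxonomy."""

    @abstractmethod
    def consolidate(self, entry: MemoryEntry) -> str:
        """Persist new information into the memory store; returns its key."""

    @abstractmethod
    def update(self, key: str, entry: MemoryEntry) -> None:
        """Revise an existing memory in light of new evidence."""

    @abstractmethod
    def index(self, entry: MemoryEntry) -> str:
        """Organize memory for efficient later access; returns a lookup key."""

    @abstractmethod
    def forget(self, key: str) -> None:
        """Remove outdated or unwanted memory."""

    @abstractmethod
    def retrieve(self, query: str, k: int = 5) -> list[MemoryEntry]:
        """Fetch the memories most relevant to a query."""

    @abstractmethod
    def condense(self, keys: list[str]) -> MemoryEntry:
        """Compress several memories into a more compact representation."""


class ContextualMemory(AgentMemory):
    """Toy in-process store standing in for an explicit external memory."""

    def __init__(self) -> None:
        self._store: dict[str, MemoryEntry] = {}

    def consolidate(self, entry: MemoryEntry) -> str:
        key = self.index(entry)
        self._store[key] = entry
        return key

    def update(self, key: str, entry: MemoryEntry) -> None:
        self._store[key] = entry

    def index(self, entry: MemoryEntry) -> str:
        return str(hash(entry.content))

    def forget(self, key: str) -> None:
        self._store.pop(key, None)

    def retrieve(self, query: str, k: int = 5) -> list[MemoryEntry]:
        # Naive substring match; a real system would use embedding similarity.
        hits = [e for e in self._store.values()
                if query.lower() in e.content.lower()]
        return hits[:k]

    def condense(self, keys: list[str]) -> MemoryEntry:
        merged = " ".join(self._store[k].content
                          for k in keys if k in self._store)
        return MemoryEntry(content=merged, metadata={"condensed_from": keys})
```

A parametric counterpart would expose the same interface but realize the operations through weight-level mechanisms (e.g., fine-tuning or model editing) rather than an explicit external store.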