This paper presents a system for procedurally generating agent-based narratives using large language models (LLMs). Users can drag and drop multiple agents and objects into a scene, with each entity automatically assigned semantic metadata describing its identity, role, and potential interactions. The scene structure is then serialized into a natural-language prompt and sent to an LLM, which returns a structured string describing a sequence of actions and interactions among agents and objects. The returned string encodes who performed which actions, when, and how. A custom parser interprets this string and triggers coordinated agent behaviors, animations, and interaction modules. The system supports multi-agent scenes, dynamic object manipulation, and diverse interaction types. Designed for ease of use and rapid iteration, it enables the generation of virtual agent activity suitable for prototyping agent narratives. The performance of the system was evaluated using four popular lightweight LLMs; each model's processing and response times were measured across scenarios of varying complexity. The collected data were analyzed to compare consistency across the examined scenarios and to highlight the relative efficiency and suitability of each model for procedural agent-based narrative generation. The results demonstrate that LLMs can reliably translate high-level scene descriptions into executable agent behaviors.
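The serialize-prompt-parse pipeline described above can be sketched as follows. The scene representation, prompt wording, and `time|actor|action|target` action-string grammar here are illustrative assumptions for the sketch, not the paper's actual protocol:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A scene entity with semantic metadata (assumed minimal form)."""
    name: str
    role: str                          # e.g. "agent" or "object"
    interactions: list = field(default_factory=list)

def serialize_scene(entities):
    """Serialize the scene structure into a natural-language prompt."""
    lines = [f"{e.name} ({e.role}) can: {', '.join(e.interactions)}"
             for e in entities]
    return ("Scene contents:\n" + "\n".join(lines) +
            "\nReturn actions as 'time|actor|action|target' entries "
            "separated by ';'.")

def parse_actions(response):
    """Parse the LLM's structured string into timestamped action tuples."""
    events = []
    for entry in response.strip().split(";"):
        if not entry.strip():
            continue
        time, actor, action, target = (f.strip() for f in entry.split("|"))
        events.append((int(time), actor, action, target))
    return sorted(events)              # ordered by timestamp for playback

scene = [
    Entity("guard", "agent", ["walk", "pick_up"]),
    Entity("torch", "object", ["be_picked_up"]),
]
prompt = serialize_scene(scene)
# A hypothetical LLM reply in the assumed grammar:
reply = "0|guard|walk|torch; 1|guard|pick_up|torch"
events = parse_actions(reply)
```

In a full system, each parsed tuple would be dispatched to the corresponding behavior, animation, or interaction module rather than collected in a list.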