Enabling Large Language Models (LLMs) to reliably invoke external tools remains a critical bottleneck for autonomous agents. Existing approaches suffer from three fundamental challenges: expensive human annotation for high-quality trajectories, poor generalization to unseen tools, and quality ceilings inherent in single-model synthesis that perpetuate biases and coverage gaps. We introduce InfTool, a fully autonomous framework that breaks these barriers through self-evolving multi-agent synthesis. Given only raw API specifications, InfTool orchestrates three collaborative agents (User Simulator, Tool-Calling Assistant, and MCP Server) to generate diverse, verified trajectories spanning single-turn calls to complex multi-step workflows. The framework establishes a closed loop: synthesized data trains the model via Group Relative Policy Optimization (GRPO) with gated rewards, the improved model generates higher-quality data targeting capability gaps, and the cycle iterates without human intervention. Experiments on the Berkeley Function-Calling Leaderboard (BFCL) demonstrate that InfTool raises a base 32B model from 19.8% to 70.9% accuracy (a 258% relative improvement), surpassing models 10x larger and rivaling Claude-Opus, entirely from synthetic data and without any human annotation.
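The training signal described above combines two ideas: a gated reward that only credits trajectories passing hard verification checks, and GRPO's group-relative advantage, which normalizes each rollout's reward against the other rollouts for the same prompt. A minimal sketch of both, with hypothetical gate conditions and reward values chosen purely for illustration (the paper does not specify these numbers):

```python
from statistics import mean, pstdev

def gated_reward(parsed_ok: bool, call_verified: bool, quality: float) -> float:
    # Hypothetical gating scheme: a trajectory earns the graded quality
    # reward only after passing hard gates (well-formed output, verified
    # tool call). Gate thresholds/values here are illustrative assumptions.
    if not parsed_ok:
        return 0.0          # malformed output: no credit
    if not call_verified:
        return 0.1          # well-formed but incorrect call: token credit
    return 1.0 + quality    # verified call: base reward plus quality bonus

def grpo_advantages(rewards: list[float]) -> list[float]:
    # Group-relative advantage: standardize each reward against the
    # group of rollouts sampled for the same prompt, so the policy is
    # updated toward rollouts that beat their own group's average.
    mu = mean(rewards)
    sigma = pstdev(rewards) or 1.0  # avoid division by zero for uniform groups
    return [(r - mu) / sigma for r in rewards]

# Four rollouts for one prompt: two fail a gate, two pass with different quality.
group = [
    gated_reward(False, False, 0.9),  # unparseable
    gated_reward(True, False, 0.9),   # wrong call
    gated_reward(True, True, 0.2),    # verified, low quality
    gated_reward(True, True, 0.8),    # verified, high quality
]
advantages = grpo_advantages(group)
```

Because advantages are centered within each group, a verified trajectory is reinforced only relative to its peers; if every rollout in a group fails the gates, all advantages collapse toward zero and that group contributes no misleading gradient.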