As modern networks grow in scale and complexity, manual configuration becomes increasingly inefficient and prone to human error. While intent-driven self-configuration using large language models has shown significant promise, such models remain computationally expensive and resource-intensive, and they often raise privacy concerns because they typically rely on external cloud infrastructure. This work introduces SLM_netconfig, a fine-tuned small language model framework that combines an agent-based architecture with parameter-efficient adaptation to translate configuration intents, expressed as natural language requirements or questions, into syntactically and semantically valid network configurations. The system is trained on a domain-specific dataset generated through a pipeline derived from vendor documentation, ensuring strong alignment with real-world configuration practices. Extensive evaluation shows that SLM_netconfig, when using its question-to-configuration model, achieves higher syntactic accuracy and goal accuracy than LLM-NetCFG while substantially reducing translation latency and producing concise, interpretable configurations. These results demonstrate that fine-tuned small language models, as implemented in SLM_netconfig, can deliver efficient, accurate, and privacy-preserving configuration generation entirely on-premise, making them a practical and scalable solution for autonomous network configuration.