Traditional control interfaces for quadruped robots often impose a high barrier to entry, requiring specialized technical knowledge for effective operation. To lower this barrier, this paper presents a novel control framework that integrates Large Language Models (LLMs) to enable intuitive, natural-language-based navigation. We propose a distributed architecture in which high-level instruction processing is offloaded to an external server to overcome the onboard computational constraints of the DeepRobotics Jueying Lite 3 platform. The system grounds LLM-generated plans into executable ROS navigation commands using real-time sensor fusion (LiDAR, IMU, and odometry). Experimental validation was conducted in a structured indoor environment across four distinct scenarios, ranging from single-room tasks to complex cross-zone navigation. The results demonstrate the system's robustness, achieving an aggregate success rate of over 90\% across all scenarios and validating the feasibility of offloaded LLM-based planning for autonomous quadruped deployment in real-world settings.
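
The grounding step described above — mapping an LLM-generated plan onto executable navigation goals — can be sketched minimally as follows. All details here are illustrative assumptions, not the paper's implementation: the JSON plan schema, the landmark names, and the map-frame coordinates are hypothetical, and a real system would publish the resulting poses as ROS goals (e.g. to `move_base`) rather than return them.

```python
import json

# Hypothetical landmark table; the (x, y, yaw) map-frame poses are
# illustrative placeholders, not values from the paper.
LANDMARKS = {
    "kitchen": (3.2, 1.5, 0.0),
    "lab_door": (-0.8, 4.1, 1.57),
}

def ground_plan(llm_json: str) -> list:
    """Convert an (assumed) LLM JSON plan into a list of waypoint poses
    that a ROS navigation stack could consume as sequential goals."""
    plan = json.loads(llm_json)
    waypoints = []
    for step in plan.get("steps", []):
        if step.get("action") != "goto":
            continue  # only navigation steps are grounded in this sketch
        target = step["target"]
        if target not in LANDMARKS:
            raise ValueError(f"unknown landmark: {target}")
        waypoints.append(LANDMARKS[target])
    return waypoints

# Example LLM reply for "go to the kitchen, then to the lab door"
reply = ('{"steps": [{"action": "goto", "target": "kitchen"},'
         ' {"action": "goto", "target": "lab_door"}]}')
print(ground_plan(reply))  # → [(3.2, 1.5, 0.0), (-0.8, 4.1, 1.57)]
```

In a deployed system each returned pose would be wrapped in a stamped goal message and dispatched to the navigation stack, with sensor fusion (LiDAR, IMU, odometry) handling localization along the way.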
