The ability of robotic systems to understand human language and execute grasping actions is a pivotal challenge in robotics. In target-oriented grasping, prior works match human textual commands with images of target objects. However, these methods struggle to understand complex or flexible instructions. Moreover, they lack the capability to autonomously assess the feasibility of an instruction, blindly executing grasping tasks even when no target object is present. In this paper, we introduce QwenGrasp, a combined model that couples a large vision-language model with a 6-DoF grasp network. By leveraging a pre-trained large vision-language model, our approach operates in open-world environments with natural human language, accepting complex and flexible instructions. Furthermore, the specialized grasp network ensures the effectiveness of the generated grasp poses. A series of experiments conducted in real-world environments shows that our method exhibits a superior ability to comprehend human intent. In addition, when given an erroneous instruction, our approach can suspend task execution and provide feedback to the human, improving safety.