Deep neural networks have accelerated inverse-kinematics (IK) inference to the point where low-cost manipulators can execute complex trajectories in real time, yet the opaque nature of these models conflicts with the transparency and safety requirements emerging in responsible-AI regulation. This study proposes an explainability-centered workflow that integrates Shapley-value attribution with physics-based obstacle-avoidance evaluation for the ROBOTIS OpenManipulator-X. Building upon the original IKNet, two lightweight variants, Improved IKNet with residual connections and Focused IKNet with position-orientation decoupling, are trained on a large, synthetically generated pose-joint dataset. SHAP is employed to derive both global and local importance rankings, while the InterpretML toolkit visualizes partial-dependence patterns that expose non-linear couplings between Cartesian poses and joint angles. To bridge algorithmic insight and robotic safety, each network is embedded in a simulator that subjects the arm to randomized single- and multi-obstacle scenes; forward kinematics, capsule-based collision checks, and trajectory metrics quantify the relationship between attribution balance and physical clearance. Qualitative heat maps reveal that architectures distributing importance more evenly across pose dimensions tend to maintain wider safety margins without compromising positional accuracy. The combined analysis demonstrates that explainable-AI (XAI) techniques can illuminate hidden failure modes, guide architectural refinements, and inform obstacle-aware deployment strategies for learning-based IK. The proposed methodology thus contributes a concrete path toward trustworthy, data-driven manipulation that aligns with emerging responsible-AI standards.
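To make the attribution step concrete, the sketch below applies model-agnostic Kernel SHAP to a stand-in IK regressor. The MLP, the random data, and the pose-dimension names are illustrative assumptions, not the paper's actual IKNet code; only the shap library calls reflect its real API.

```python
# A minimal sketch of per-joint SHAP attribution for a learned IK model,
# assuming a small PyTorch MLP standing in for IKNet; architecture, data,
# and dimension names are illustrative, not the paper's actual code.
import numpy as np
import shap
import torch
import torch.nn as nn

POSE_DIMS = ["x", "y", "z", "roll", "pitch", "yaw"]  # 6-D Cartesian pose
N_JOINTS = 4                                          # OpenManipulator-X arm joints

iknet = nn.Sequential(            # placeholder for a trained IKNet variant
    nn.Linear(6, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, N_JOINTS),
)
iknet.eval()

rng = np.random.default_rng(0)
background = rng.uniform(-1.0, 1.0, size=(100, 6)).astype(np.float32)  # SHAP reference set
queries = rng.uniform(-1.0, 1.0, size=(20, 6)).astype(np.float32)     # poses to explain

def joint_output(j):
    """Return a numpy-in/numpy-out predictor for joint j only."""
    def f(poses):
        with torch.no_grad():
            return iknet(torch.as_tensor(poses, dtype=torch.float32))[:, j].numpy()
    return f

# Model-agnostic Kernel SHAP: one explainer per joint output.
for j in range(N_JOINTS):
    explainer = shap.KernelExplainer(joint_output(j), background)
    sv = explainer.shap_values(queries, nsamples=100)   # (20, 6) local attributions
    global_importance = np.abs(sv).mean(axis=0)         # mean |SHAP| per pose dimension
    ranking = sorted(zip(POSE_DIMS, global_importance), key=lambda t: -t[1])
    print(f"joint {j}:", [(name, round(float(s), 4)) for name, s in ranking])
```

Averaging the absolute attributions over queries yields the global ranking discussed above; the raw per-query values give the local view, and their spread across pose dimensions is what the heat-map comparison of the three architectures visualizes.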
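The physical-clearance side of the evaluation can be sketched just as briefly. Assuming each link is approximated as a capsule (a line segment plus a radius, with endpoints obtained from forward kinematics) and each obstacle as a sphere, the collision check reduces to a point-to-segment distance test; the helpers below are generic geometry under those assumptions, not the paper's simulator code.

```python
# A minimal sketch of a capsule-based clearance check, assuming capsule links
# and spherical obstacles; generic geometry, not the paper's exact simulator.
import numpy as np

def point_segment_distance(p, a, b):
    """Shortest distance from point p to segment ab."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / max(np.dot(ab, ab), 1e-12), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def capsule_sphere_clearance(seg_start, seg_end, link_radius, center, obs_radius):
    """Positive clearance means no contact; <= 0 means capsule and sphere overlap."""
    return point_segment_distance(center, seg_start, seg_end) - link_radius - obs_radius

# Example: one link (endpoints from forward kinematics) vs. one spherical obstacle.
a, b = np.array([0.0, 0.0, 0.1]), np.array([0.0, 0.0, 0.3])
margin = capsule_sphere_clearance(a, b, 0.02, np.array([0.08, 0.0, 0.2]), 0.03)
print(f"clearance: {margin:.3f} m")  # > 0: safe; <= 0: collision
```

Evaluating this margin for every link-obstacle pair along a trajectory gives the per-step clearance; the minimum over a trajectory is the safety-margin metric that is correlated with attribution balance in the analysis.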