In hazardous environments such as nuclear facilities, robotic systems are essential for executing tasks that would otherwise expose humans to dangerous, potentially fatal radiation levels. However, many operations in nuclear environments require teleoperating robots, which places a significant cognitive load on operators as well as physical strain over extended periods. To address this challenge, we propose enhancing the teleoperation system with an assistive model that predicts operator intentions and dynamically adapts to their needs. The machine learning model processes robotic arm force data, analyzing spatiotemporal patterns to accurately detect the ongoing task before its completion. To support this approach, we collected a diverse dataset from teleoperation experiments involving glovebox tasks in nuclear applications; the dataset encompasses heterogeneous spatiotemporal data captured from the teleoperation system. We employ a hybrid Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) model to learn and forecast operator intentions from these spatiotemporal data. By accurately predicting operator intentions, the robot can execute tasks more efficiently and effectively with minimal input from the operator. Our experiments validated the model on this dataset, focusing on tasks such as radiation surveys and object grasping. The proposed approach achieved an F1-score of 89% for task classification and an F1-score of 86% for forecasting operator intentions over a 5-second window. These results highlight the potential of our method to improve the safety, precision, and efficiency of robotic operations in hazardous environments, thereby significantly reducing human radiation exposure.
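To illustrate the kind of architecture described above, the following is a minimal sketch of a hybrid CNN-LSTM classifier for windows of robotic arm force data. It is not the paper's implementation: the input channel count, window length, layer sizes, number of intent classes, and sampling rate in the usage example are all assumptions chosen for illustration.

```python
# Minimal sketch of a hybrid CNN-LSTM intent classifier (PyTorch).
# Assumptions (not from the paper): 6-channel force/torque input,
# 100 timesteps per window, 5 task/intent classes, and all layer sizes.
import torch
import torch.nn as nn

class CnnLstmIntentClassifier(nn.Module):
    def __init__(self, in_channels=6, num_classes=5, hidden_size=64):
        super().__init__()
        # 1-D convolutions extract local spatiotemporal features
        # from the multi-channel force signal.
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # The LSTM models longer-range temporal dependencies over the
        # sequence of CNN features.
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # x: (batch, channels, timesteps)
        feats = self.cnn(x)             # (batch, 64, timesteps / 4)
        feats = feats.permute(0, 2, 1)  # (batch, seq_len, 64)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])       # class logits

# Usage example: classify a batch of 5-second force windows,
# assuming a hypothetical 20 Hz sampling rate (100 timesteps).
model = CnnLstmIntentClassifier()
windows = torch.randn(8, 6, 100)        # (batch, channels, timesteps)
logits = model(windows)
print(logits.shape)                     # torch.Size([8, 5])
```

The design choice reflected here is the division of labor stated in the abstract: convolutions capture short-range patterns across the force channels, while the recurrent layer aggregates them over the window so the ongoing task can be classified before it completes.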