With the growing need to effectively support workforce upskilling in the manufacturing sector, virtual reality is gaining popularity as a scalable training solution. However, most current systems are designed as static, step-by-step tutorials and do not adapt to a learner's needs or cognitive load, a critical factor in learning and long-term retention. We address this limitation with CLAd-VR, an adaptive VR training system that integrates real-time EEG-based sensing to measure the learner's cognitive load and adapt instruction accordingly, targeting domain-specific tasks in manufacturing. The system features a VR training module for a precision drilling task, designed with multimodal instructional elements including animations, text, and video. Our cognitive load sensing pipeline uses a wearable EEG device to capture the trainee's neural activity, which is processed through an LSTM model to classify their cognitive load as low, optimal, or high in real time. Based on these classifications, the system dynamically adjusts task difficulty and delivers adaptive guidance through voice prompts, visual cues, or ghost-hand animations. This paper introduces the CLAd-VR system architecture, including the EEG sensing hardware, real-time inference model, and adaptive VR interface.
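To make the sensing-to-adaptation pipeline concrete, the sketch below shows how a windowed EEG stream could feed an LSTM that emits a low/optimal/high label for the VR side to act on. All specifics here (channel count, window length, hidden size, and the helper names `CognitiveLoadLSTM` and `classify_window`) are illustrative assumptions, not the configuration reported for CLAd-VR.

```python
import torch
import torch.nn as nn

# Assumed framing: 8 EEG channels, 2-second windows of 512 samples each.
N_CHANNELS = 8
WINDOW_SAMPLES = 512
CLASSES = ["low", "optimal", "high"]


class CognitiveLoadLSTM(nn.Module):
    """LSTM over a window of EEG features, producing a 3-way cognitive-load class."""

    def __init__(self, n_channels: int = N_CHANNELS, hidden: int = 64, layers: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(
            input_size=n_channels,
            hidden_size=hidden,
            num_layers=layers,
            batch_first=True,
        )
        self.head = nn.Linear(hidden, len(CLASSES))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # classify from the final time step


def classify_window(model: nn.Module, window: torch.Tensor) -> str:
    """Map one EEG window of shape (time, channels) to a cognitive-load label."""
    model.eval()
    with torch.no_grad():
        logits = model(window.unsqueeze(0))
        return CLASSES[int(logits.argmax(dim=-1))]


if __name__ == "__main__":
    model = CognitiveLoadLSTM()
    fake_window = torch.randn(WINDOW_SAMPLES, N_CHANNELS)  # stand-in for streamed EEG
    label = classify_window(model, fake_window)
    print(f"predicted cognitive load: {label}")
    # In the adaptive loop, the VR application would consume this label, e.g.
    # "high" could trigger a ghost-hand demonstration or a simpler step.
```

In a deployed loop, the classifier's per-window label would be sent to the VR application, which then selects among the guidance modalities (voice prompts, visual cues, ghost-hand animations) and adjusts task difficulty accordingly.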