Driver Monitoring Systems (DMSs) are crucial for safe hand-over actions in Level-2+ self-driving vehicles. State-of-the-art DMSs leverage multiple sensors mounted at different locations to monitor the driver and the vehicle's interior scene, and employ decision-level fusion to integrate these heterogeneous data. However, this fusion method may not fully exploit the complementarity of the different data sources and may overlook their relative importance. To address these limitations, we propose a novel multiview multimodal driver monitoring system based on feature-level fusion through multi-head self-attention (MHSA). We demonstrate its effectiveness by comparing it against four alternative fusion strategies (Sum, Conv, SE, and AFF). We also present SuMoCo, a novel GPU-friendly supervised contrastive learning framework, to learn better representations. Furthermore, we provide fine-grained annotations for the test split of the DAD dataset to enable multi-class recognition of drivers' activities. Experiments on this enhanced database demonstrate that 1) the proposed MHSA-based fusion method (AUC-ROC: 97.0\%) outperforms all baselines and previous approaches, and 2) training MHSA with patch masking improves its robustness against modality/view collapses. The code and annotations are publicly available.
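To make the feature-level fusion idea concrete, the following is a minimal, hypothetical PyTorch sketch (not the authors' implementation): each view/modality contributes one feature token, the tokens attend to one another via multi-head self-attention, and the attended tokens are pooled into a single fused representation. The class name `MHSAFusion`, the feature dimension, and the use of `key_padding_mask` to emulate a dropped view/modality are all illustrative assumptions.

```python
import torch
import torch.nn as nn


class MHSAFusion(nn.Module):
    """Hypothetical sketch of MHSA-based feature-level fusion.

    Each of the V view/modality feature vectors is treated as a token;
    tokens exchange information through multi-head self-attention, and
    the result is mean-pooled into one fused embedding.
    """

    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats, mask=None):
        # feats: (batch, V, dim) -- one token per view/modality.
        # mask:  optional (batch, V) bool tensor; True marks a missing
        #        view/modality, loosely analogous to masking for robustness.
        out, _ = self.attn(feats, feats, feats, key_padding_mask=mask)
        return self.norm(out).mean(dim=1)  # (batch, dim) fused feature


feats = torch.randn(2, 4, 256)  # 2 samples, 4 view/modality tokens each
fused = MHSAFusion()(feats)
print(tuple(fused.shape))  # (2, 256)
```

Unlike a Sum or Conv fusion, the attention weights here are input-dependent, so the model can learn to weight sources by their relative importance per sample.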