The increasing presence of autonomous vehicles (AVs) in transportation systems makes effective interaction between AVs and pedestrians indispensable. An external human--machine interface (eHMI), which employs visual or auditory cues to explicitly convey vehicle behavior, can compensate for the loss of human-like interaction and enhance AV--pedestrian cooperation. To facilitate faster intent convergence between pedestrians and AVs, this study incorporates an adaptive interaction mechanism based on pedestrian intent recognition into the eHMI, termed IR-eHMI. IR-eHMI dynamically detects and infers the behavioral intentions of both pedestrians and AVs by identifying their cooperation states. The proposed interaction framework is implemented and evaluated on a virtual reality (VR) experimental platform, and its effectiveness is demonstrated through statistical analysis. Experimental results show that, compared with a traditional fixed-distance eHMI, IR-eHMI significantly improves crossing efficiency and reduces gaze distraction while maintaining interaction safety. This adaptive and explicit interaction mode introduces an innovative procedural paradigm for AV--pedestrian cooperation.