Trust in robots has been attracting attention from multiple directions, as it is central to theoretical descriptions of human-robot interaction. It is essential for reaching high acceptance and usage rates of robotic technologies in society, as well as for enabling effective human-robot teaming. Researchers have been trying to model the development of trust in robots to improve the overall rapport between humans and robots. Unfortunately, miscalibration of trust in automation is a common issue that jeopardizes the effectiveness of automation use. It occurs when a user's trust level is not appropriate to the capabilities of the automation being used. Users may under-trust the automation, failing to use functionalities that the machine can perform correctly because of a lack of trust, or over-trust it, using the machine in situations where its capabilities are not adequate. The main objective of this work is to examine drivers' trust development in an automated driving system (ADS). We aim to model how risk factors (e.g., false alarms and misses from the ADS) and the short-term interactions associated with these risk factors influence the dynamics of drivers' trust in the ADS. The driving context facilitates the instrumentation needed to measure trusting behaviors, such as drivers' eye movements and usage time of the automated features. Our findings indicate that a reliable characterization of drivers' trusting behaviors, and a consequent estimation of trust levels, is possible. We expect that these techniques will permit the design of ADSs that adapt their behaviors to adjust drivers' trust levels. This capability could avoid under- and over-trust, which can harm drivers' safety or performance.
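As a purely illustrative sketch (not the model developed in this work), the Python snippet below shows one simple way such event-driven trust dynamics could be encoded: hypothetical penalty parameters pull trust down after false alarms and misses, while faultless intervals let it recover gradually. All parameter names and values here are assumptions for illustration only.

```python
# Illustrative sketch of event-driven trust dynamics (NOT the authors' model).
# All parameters are hypothetical; misses are penalized more heavily than
# false alarms on the assumption that they expose the driver to greater risk.

ALPHA_GAIN = 0.05   # assumed trust recovery per faultless interval
BETA_FA = 0.15      # assumed trust loss per false alarm
BETA_MISS = 0.30    # assumed trust loss per miss

def update_trust(trust: float, false_alarm: bool, miss: bool) -> float:
    """One step of a hypothetical discrete-time trust update, clipped to [0, 1]."""
    if not (false_alarm or miss):
        trust += ALPHA_GAIN                      # slow recovery when the ADS performs well
    trust -= BETA_FA * false_alarm + BETA_MISS * miss  # event-driven drops
    return max(0.0, min(1.0, trust))

# Example trajectory: a miss at step 2 causes a sharp drop in trust,
# followed by gradual recovery over subsequent faultless intervals.
trust = 0.5
for step, (fa, miss) in enumerate([(False, False), (False, False),
                                   (False, True), (False, False), (False, False)]):
    trust = update_trust(trust, fa, miss)
    print(f"step {step}: trust = {trust:.2f}")
```

Under these assumptions, the asymmetry between BETA_MISS and BETA_FA captures the intuition that trust falls faster after high-risk failures than it rebuilds during normal operation.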