Center touchscreens are the main HMI (Human-Machine Interface) between the driver and the vehicle. They are becoming larger and increasingly complex, and they replace functions that could previously be controlled via haptic interfaces. To ensure that touchscreen HMIs can be operated safely, they are subject to strict regulations and elaborate test protocols. These methods and user trials require fully functional prototypes and are expensive and time-consuming. It is therefore desirable to estimate the workload of specific interfaces or interaction sequences as early as possible in the development process. To address this problem, we envision a model-based approach that predicts the driver's secondary task load when interacting with the center screen, based on the combination of user interactions and UI elements. In this work, we present our current status, preliminary results, and our vision for a model-based system built upon large-scale naturalistic driving data.