The iPhone was introduced only a decade ago, in 2007, but it has fundamentally changed the way we interact with online information. Mobile devices differ radically from classic command-based and point-and-click user interfaces, allowing for gesture-based interaction using fine-grained touch and swipe signals. With the rapid growth of voice-controlled intelligent personal assistants on mobile devices, such as Microsoft's Cortana, Google Now, and Apple's Siri, mobile devices have become truly personal: they keep us online all the time and assist us in any task, both at work and in our daily lives, making context a crucial factor to consider. Mobile usage now exceeds desktop usage and is still growing rapidly, yet our main ways of training and evaluating personal assistants are still based on (and framed in terms of) classical desktop interactions, focusing on explicit queries, clicks, and dwell time. Modern user interaction with mobile devices, however, is radically different, due to touch screens with gesture- and voice-based control and the varying context of use, e.g., in a car or on a bike, which often invalidates the assumptions underlying today's user satisfaction evaluation. There is an urgent need to understand voice- and gesture-based interaction, taking all interaction signals and context into account in appropriate ways. We propose a research agenda for developing methods to evaluate and improve context-aware user satisfaction with mobile interactions using gesture-based signals at scale.