Adaptive user interfaces (UIs) automatically change an interface to better support users' tasks. Recently, machine learning techniques have enabled the transition to more powerful and complex adaptive UIs. However, a core challenge for adaptive UIs is their reliance on high-quality user data that must be collected offline for each task. To overcome this challenge, we formulate UI adaptation as a multi-agent reinforcement learning problem. In our formulation, a user agent mimics a real user and learns to interact with a UI. Simultaneously, an interface agent learns UI adaptations that maximize the user agent's performance. The interface agent learns the task structure from the user agent's behavior and, based on that, can support the user agent in completing its task. Our method produces adaptation policies that are learned entirely in simulation and therefore requires no real user data. Our experiments show that the learned policies generalize to real users and perform on par with data-driven supervised learning baselines.
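The core idea of co-training a simulated user with an adapting interface can be illustrated with a deliberately simplified sketch. The code below is not the paper's multi-agent RL method: the user agent is reduced to a fixed linear-scan model, and the interface agent's "policy" is a frequency-based reordering heuristic that it learns purely from the simulated user's behavior. All names, the menu items, and the target distribution are hypothetical, chosen only to show that an interface adapted from simulated interactions can lower the simulated user's task cost without any real user data.

```python
import random

# Hypothetical toy task: select a target command from a menu.
ITEMS = ["open", "save", "copy", "paste", "undo"]
TARGET_PROBS = [0.05, 0.10, 0.40, 0.35, 0.10]  # assumed usage frequencies

def user_agent_cost(menu, target):
    """Simulated user: scans the menu top-down; cost = 1 + target position."""
    return 1 + menu.index(target)

class InterfaceAgent:
    """Observes the simulated user's selections and reorders the menu.

    A stand-in for a learned adaptation policy: it counts how often each
    item is the target and places frequent items first.
    """
    def __init__(self, items):
        self.counts = {item: 0 for item in items}

    def adapt(self):
        # Most-frequently-selected items first (stable sort keeps the
        # original order for ties, e.g. before any observations).
        return sorted(self.counts, key=self.counts.get, reverse=True)

    def observe(self, target):
        self.counts[target] += 1

def run(episodes=5000, seed=0):
    """Compare a static menu against the adapting one in simulation."""
    rng = random.Random(seed)
    agent = InterfaceAgent(ITEMS)
    static_cost = adaptive_cost = 0.0
    for _ in range(episodes):
        target = rng.choices(ITEMS, weights=TARGET_PROBS)[0]
        static_cost += user_agent_cost(ITEMS, target)
        adaptive_cost += user_agent_cost(agent.adapt(), target)
        agent.observe(target)  # interface learns from user behavior only
    return static_cost / episodes, adaptive_cost / episodes
```

Running `run()` shows the adapted menu yields a lower average selection cost than the static one, mirroring the paper's premise that an interface agent can learn useful adaptations from a simulated user alone. The actual method replaces both heuristics with reinforcement-learning agents trained jointly.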