Researchers have widely acknowledged the potential of control mechanisms with which end-users of recommender systems can better tailor recommendations. However, few e-learning environments so far incorporate such mechanisms, for example to steer recommended exercises. In addition, studies with adolescents in this context are rare. To address these limitations, we designed a control mechanism and a visualisation of the control's impact through an iterative design process with adolescents and teachers. We then investigated how these functionalities affect adolescents' trust in an e-learning platform that recommends maths exercises. A randomised controlled experiment with 76 middle school and high school adolescents showed that visualising the impact of exercised control significantly increases trust. Furthermore, having control over their mastery level seemed to inspire adolescents to challenge themselves reasonably and to reflect upon the underlying recommendation algorithm. Finally, a significant increase in perceived transparency suggested that visualising steering actions can indirectly explain why recommendations are suitable, which opens interesting research tracks for the broader field of explainable AI.