The ability to monitor audience reactions is critical when delivering presentations, yet current videoconferencing platforms offer limited support for it. This work leverages recent advances in affect sensing to capture and communicate relevant audience signals. Using an exploratory survey (N = 175), we identified the audience responses most relevant to presenters, such as confusion, engagement, and head nods. We then implemented AffectiveSpotlight, a Microsoft Teams bot that analyzes the facial responses and head gestures of audience members and dynamically spotlights the most expressive ones. In a within-subjects study with 14 groups (N = 117), presenters using the system were significantly more aware of their audience, spoke for longer, and self-assessed the quality of their talks more similarly to the audience's ratings than under two control conditions (a randomly selected spotlight and the default platform UI). Based on feedback from the study, we provide design recommendations for future affective interfaces for online presentations.
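To make the spotlighting idea concrete, the following is a minimal sketch of how a bot might rank audience members by recent expressiveness and pick one to spotlight. All names, signal fields, and equal weights here are illustrative assumptions, not the paper's actual implementation or the Microsoft Teams API.

```python
# Hypothetical illustration of AffectiveSpotlight-style selection: every few
# seconds, score each attendee's recent facial/gesture signals over a sliding
# window and spotlight the highest-scoring one. Fields and weights are assumed.
from dataclasses import dataclass


@dataclass
class AffectFrame:
    # Assumed per-frame probabilities from a face/gesture analyzer.
    confusion: float
    engagement: float
    head_nod: float


def expressiveness(frames: list[AffectFrame]) -> float:
    """Average an (assumed equally-weighted) sum of affect signals over a window."""
    if not frames:
        return 0.0
    total = sum(f.confusion + f.engagement + f.head_nod for f in frames)
    return total / len(frames)


def pick_spotlight(windows: dict[str, list[AffectFrame]]) -> str:
    """Return the attendee id whose recent window is most expressive."""
    return max(windows, key=lambda aid: expressiveness(windows[aid]))


if __name__ == "__main__":
    recent = {
        "attendee_1": [AffectFrame(0.1, 0.8, 0.6), AffectFrame(0.2, 0.7, 0.5)],
        "attendee_2": [AffectFrame(0.3, 0.2, 0.0), AffectFrame(0.2, 0.1, 0.1)],
    }
    print(pick_spotlight(recent))  # -> "attendee_1"
```

In practice such a system would also need debouncing (e.g., a minimum dwell time per spotlight) so the highlighted video does not switch distractingly often; the sketch omits this.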