Lecture slide presentations, sequences of pages containing text and figures accompanied by speech, are carefully constructed and presented to optimally transfer knowledge to students. Previous studies in multimedia and psychology attribute the effectiveness of lecture presentations to their multimodal nature. As a step toward developing AI that can aid student learning as an intelligent teacher assistant, we introduce the Multimodal Lecture Presentations dataset, a large-scale benchmark testing the capabilities of machine learning models in the multimodal understanding of educational content. Our dataset contains aligned slides and spoken language for 180+ hours of video and 9000+ slides, with 10 lecturers from various subjects (e.g., computer science, dentistry, biology). We introduce two research tasks designed as stepping stones toward AI agents that can explain (automatically captioning a lecture presentation) and illustrate (synthesizing visual figures to accompany spoken explanations) educational content. We provide manual annotations to support these two research tasks and evaluate state-of-the-art models on them. Comparing baselines with human student performance, we find that current models struggle with (1) weak crossmodal alignment between slides and spoken text, (2) learning novel visual mediums, (3) technical language, and (4) long-range sequences. To address these challenges, we also introduce PolyViLT, a multimodal transformer trained with a multi-instance learning loss that is more effective than current approaches. We conclude by shedding light on the challenges and opportunities in the multimodal understanding of educational presentations.
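The abstract does not spell out the multi-instance learning objective; as a rough sketch (an assumption in the style of the standard MIL-NCE contrastive loss, not necessarily PolyViLT's exact formulation), one can treat the set $P_i$ of spoken sentences temporally aligned with slide $i$ as a bag of positives and contrast it against a set of negative pairs $N_i$:

$$
\mathcal{L} = -\sum_{i} \log \frac{\sum_{(s,t) \in P_i} \exp\!\left(s^\top t / \tau\right)}{\sum_{(s,t) \in P_i \cup N_i} \exp\!\left(s^\top t / \tau\right)}
$$

where $s$ and $t$ denote slide and speech embeddings produced by the multimodal transformer and $\tau$ is a temperature hyperparameter. The bag structure is what makes such a loss suited to the weak crossmodal alignment noted above: any one of several spoken sentences may describe a given slide, so the objective rewards matching the bag rather than a single forced pair.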