Accurately grading open-ended assignments in large or massive open online courses (MOOCs) is non-trivial. Peer review is a promising solution but can be unreliable due to the small number of reviewers per submission and the use of unevaluated review forms. To date, no work has 1) leveraged sentiment analysis in the peer-review process to inform or validate grades, or 2) utilized aspect extraction to craft a review form from what students actually communicate. Our work utilizes, rather than discards, student data from review-form comments to deliver better information to the instructor. We detail the process by which we create our domain-dependent lexicon and aspect-informed review form, as well as our complete sentiment analysis algorithm, which produces a fine-grained sentiment score from text alone. We close by analyzing validity and drawing conclusions from our corpus of over 6,800 peer reviews from nine courses, assessing the viability of sentiment analysis in the classroom for increasing the information yield and reliability of grading open-ended assignments in large courses.
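To make the lexicon-based approach concrete, below is a minimal sketch of how a domain-dependent lexicon can yield a fine-grained sentiment score from comment text alone. This is an illustrative assumption, not the paper's algorithm: the lexicon entries, negator list, scope rule, and normalization shown here are all hypothetical.

```python
# Illustrative sketch of lexicon-based sentiment scoring.
# The lexicon, negation handling, and normalization are hypothetical
# examples, not the domain-dependent lexicon or algorithm from the paper.
import re

# Hypothetical domain-dependent lexicon: word -> polarity weight.
LEXICON = {
    "clear": 1.0, "thorough": 1.5, "creative": 1.0,
    "confusing": -1.0, "incomplete": -1.5, "sloppy": -1.0,
}
NEGATORS = {"not", "never", "hardly"}

def sentiment_score(comment: str) -> float:
    """Return a fine-grained score: the sum of matched lexicon weights,
    with polarity flipped after a negator, normalized by match count."""
    tokens = re.findall(r"[a-z']+", comment.lower())
    total, hits, negate = 0.0, 0, False
    for tok in tokens:
        if tok in NEGATORS:
            negate = True
            continue
        if tok in LEXICON:
            total += -LEXICON[tok] if negate else LEXICON[tok]
            hits += 1
        negate = False  # negation scope: only the immediately following token
    return total / hits if hits else 0.0

print(sentiment_score("The report is thorough but not clear"))  # 0.25
```

In this toy version, "thorough" (+1.5) and the negated "clear" (-1.0) average to 0.25, a weakly positive review comment; a real pipeline would also need to handle misspellings, intensifiers, and aspect-specific polarity.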