The possible consequences of the same context may vary depending on the situation being referred to. However, current studies in natural language processing do not focus on situated commonsense reasoning under multiple possible scenarios. This study frames the task by asking multiple questions, each sharing the same set of possible endings as candidate answers, given a short story text. Our resulting dataset, Possible Stories, consists of more than 4.5K questions over 1.3K story texts in English. We find that even current strong pretrained language models struggle to answer the questions consistently: the highest accuracy in an unsupervised setting (60.2%) falls far behind human accuracy (92.5%). Through a comparison with existing datasets, we observe that the questions in our dataset contain minimal annotation artifacts in the answer options. In addition, our dataset includes examples that require counterfactual reasoning, as well as ones that require readers' reactions and fictional information, suggesting that it can serve as a challenging testbed for future studies on situated commonsense reasoning.