Question generation has recently gained considerable research interest, especially with the advent of large language models. In and of itself, question generation can be considered 'AI-hard', as there is no unanimously agreed-upon sense of what makes a question 'good' or 'bad'. In this paper, we tackle two fundamental problems in parallel. On one hand, we address the scaling problem, where question-generation and question-answering applications must be applied to massive amounts of text without ground-truth labels. The usual approaches are to downsample or to summarize; however, both carry critical risks of misinformation. On the other hand, and related to the misinformation problem, we address the 'safety' problem, as many public institutions require a much higher level of accuracy in the content they provide. We introduce an adversarial approach to tackling the question-generation safety problem at scale. Specifically, we design a question-answering system that prunes out unanswerable questions that may be generated, and further increases the quality of the answers that are generated. We build a production-ready, easily-pluggable pipeline that can be applied to any given body of text, is scalable, and avoids generating hate speech, profanity, or misinformation. Based on the results, our approach generates more than six times as many quality questions as the abstractive approach, with a perceived quality 44% higher, according to a survey of 168 participants.
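The adversarial pruning step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `generate_questions` and `answer_confidence` are hypothetical stand-ins for the actual question-generation and question-answering models, and the confidence threshold is an assumed value.

```python
# Sketch of the adversarial pruning loop: a QA model scores each
# candidate question against the source passage, and questions it
# cannot answer confidently are discarded. Model calls are stubbed
# out for illustration only.

CONFIDENCE_THRESHOLD = 0.5  # hypothetical cut-off for answerability

def generate_questions(passage: str) -> list[str]:
    """Stand-in for the question-generation model."""
    return [
        "What does the pipeline use to filter questions?",
        "Why is the sky green?",  # unanswerable from the passage
    ]

def answer_confidence(question: str, passage: str) -> float:
    """Stand-in for the QA model's answer-confidence score."""
    return 0.9 if "pipeline" in question else 0.1

def prune_unanswerable(passage: str) -> list[str]:
    """Keep only questions the QA model can answer confidently."""
    return [
        q for q in generate_questions(passage)
        if answer_confidence(q, passage) >= CONFIDENCE_THRESHOLD
    ]

passage = "The pipeline filters generated questions with a QA model."
print(prune_unanswerable(passage))
```

In a real deployment, the two stubs would be replaced by the trained QG and QA models, so that the QA component acts as the adversary that rejects unanswerable candidates before they reach users.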