Fine-tuned language models use greedy decoding to answer reading comprehension questions with relative success. However, this approach does not ensure that the answer is a span in the given passage, nor does it guarantee that it is the most probable one. Does greedy decoding actually perform worse than an algorithm that does adhere to these properties? To study the performance and optimality of greedy decoding, we present exact-extract, a decoding algorithm that efficiently finds the most probable answer span in the context. We compare the performance of T5 with both decoding algorithms on zero-shot and few-shot extractive question answering. When no training examples are available, exact-extract significantly outperforms greedy decoding. However, greedy decoding quickly converges towards the performance of exact-extract with the introduction of a few training examples, becoming more extractive and increasingly likely to generate the most probable span as the training set grows. We also show that self-supervised training can bias the model towards extractive behavior, increasing performance in the zero-shot setting without resorting to annotated examples. Overall, our results suggest that pretrained language models are so good at adapting to extractive question answering that it is often enough to fine-tune on a small training set for the greedy algorithm to emulate the optimal decoding strategy.
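To make the contrast between the two decoders concrete, the sketch below illustrates the idea behind exact-extract in its simplest form: enumerate every span of the passage, score each span by the model's total log-probability of generating it, and return the argmax. This naive version runs one forward pass per candidate span and is quadratic in passage length; the paper's exact-extract computes the same argmax efficiently, which is not reproduced here. The function name, prompt format, and `max_span_len` cutoff are illustrative assumptions, not the paper's exact setup.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

def naive_exact_extract(model, tokenizer, prompt, passage, max_span_len=20):
    """Return the passage span with the highest model log-probability.

    A naive O(n^2) sketch of the idea behind exact-extract; the paper
    computes the same argmax efficiently rather than by enumeration.
    """
    passage_ids = tokenizer(passage, add_special_tokens=False).input_ids
    enc = tokenizer(prompt, return_tensors="pt")
    best_score, best_span = float("-inf"), None
    for start in range(len(passage_ids)):
        for end in range(start + 1, min(start + max_span_len, len(passage_ids)) + 1):
            span = passage_ids[start:end]
            # Score the span (plus EOS) as the decoder's target sequence.
            labels = torch.tensor([span + [tokenizer.eos_token_id]])
            with torch.no_grad():
                out = model(input_ids=enc.input_ids, labels=labels)
            # out.loss is mean cross-entropy per label token, so
            # multiplying by length recovers the total log-probability.
            score = -out.loss.item() * labels.shape[1]
            if score > best_score:
                best_score, best_span = score, span
    return tokenizer.decode(best_span)

# Illustrative usage (model choice and prompt wording are assumptions):
# model = T5ForConditionalGeneration.from_pretrained("t5-large")
# tokenizer = T5Tokenizer.from_pretrained("t5-large")
# prompt = f"question: {question} context: {passage}"
# answer = naive_exact_extract(model, tokenizer, prompt, passage)
```

Greedy decoding, by contrast, simply generates tokens one at a time, so nothing constrains its output to be a substring of the passage; exact-extract restricts the search space to passage spans by construction.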