Natural language explanations promise to make a neural network's decision process in complex vision-language tasks intuitively understandable, a goal pursued by recent VL-NLE models. While current models achieve impressive task accuracy and explanation plausibility, they suffer from a range of issues: some feature a modular design in which the explanation generation module is poorly integrated with a separate module for task-answer prediction, some employ backbone models trained on a limited set of tasks, and others incorporate ad hoc solutions to increase performance on single datasets. We propose to evade these limitations by applying recent advances in large-scale multi-task pretraining of generative Transformer models to VL-NLE tasks. Our approach outperforms recent models by a large margin, with human annotators preferring the generated explanations over the ground truth in two out of three evaluated datasets. As a novel challenge in VL-NLE research, we propose the problem of multi-task VL-NLE and show that jointly training on multiple tasks can improve explanation quality. We discuss the ethical implications of high-quality NLE generation and other issues in recent VL-NLE research.