Interpretable rationales for model predictions are crucial in practical applications. We develop neural models that possess an interpretable inference process for dependency parsing. Our models adopt instance-based inference, where dependency edges are extracted and labeled by comparing them to edges in a training set. The training edges are explicitly used for the predictions; thus, it is easy to grasp the contribution of each edge to a prediction. Our experiments show that our instance-based models achieve accuracy competitive with standard neural models while offering reasonably plausible instance-based explanations.
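To make the idea of instance-based inference concrete, the following is a minimal sketch, not the paper's exact model: it assumes each candidate edge (head, dependent) has already been encoded as a fixed-size vector, and scores it by k-nearest-neighbor similarity to stored training-edge representations. The function name `score_edge`, the vector dimensionality, the cosine-similarity metric, and the weighted voting scheme are all illustrative assumptions.

```python
import numpy as np

# Hypothetical edge representations: each training edge (head, dependent)
# is assumed to be encoded as a fixed-size vector, e.g., from a neural encoder.
rng = np.random.default_rng(0)
train_edge_reprs = rng.normal(size=(1000, 64))      # 1000 stored training edges
train_edge_labels = rng.integers(0, 40, size=1000)  # e.g., 40 dependency relations

def score_edge(candidate_repr, k=5):
    """Label a candidate edge by similarity to training edges (k-NN sketch).

    Returns the predicted relation id and the indices of the supporting
    training edges, which serve as the instance-based rationale.
    """
    # Cosine similarity between the candidate and every stored training edge.
    sims = train_edge_reprs @ candidate_repr
    sims /= np.linalg.norm(train_edge_reprs, axis=1) * np.linalg.norm(candidate_repr)

    # The k most similar training edges vote on the label; their similarity
    # scores are the per-instance contributions to the prediction.
    topk = np.argsort(-sims)[:k]
    votes = np.bincount(train_edge_labels[topk], weights=sims[topk])
    return int(votes.argmax()), topk

candidate = rng.normal(size=64)
label, support = score_edge(candidate)
print(f"predicted relation id: {label}, supporting training edges: {support}")
```

Because the prediction is a direct function of specific training edges, the returned `support` indices point a user to the exact instances behind each decision, which is the source of interpretability the abstract describes.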