We propose a multi-task, probabilistic approach to distantly supervised relation extraction that brings the representations of sentences containing the same Knowledge Base pairs closer together. To achieve this, we bias the latent space of sentences via a Variational Autoencoder (VAE) that is trained jointly with a relation classifier. The latent code guides the pair representations and influences sentence reconstruction. Experimental results on two datasets created via distant supervision indicate that multi-task learning yields performance benefits. Additional exploration of incorporating Knowledge Base priors into the VAE reveals that the sentence space can be shifted towards that of the Knowledge Base, offering interpretability and further improving results.
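The joint training described above can be sketched as a weighted sum of a relation-classification loss and the VAE objective; the weighting coefficient $\beta$ and the notation below are illustrative assumptions, not taken from the abstract:

```latex
\mathcal{L}
  = \underbrace{\mathcal{L}_{\mathrm{RE}}\bigl(y, \hat{y}\bigr)}_{\text{relation classifier}}
  + \beta \Bigl(
      \underbrace{-\,\mathbb{E}_{q_\phi(z \mid x)}\bigl[\log p_\theta(x \mid z)\bigr]}_{\text{sentence reconstruction}}
    + \underbrace{D_{\mathrm{KL}}\bigl(q_\phi(z \mid x) \,\big\|\, p(z)\bigr)}_{\text{KL to the prior}}
    \Bigr)
```

Under this reading, the "Knowledge Base priors" variant would replace the standard prior $p(z)$ (e.g., a unit Gaussian) with a distribution informed by Knowledge Base pair representations, which is what shifts the sentence latent space towards that of the Knowledge Base.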