Despite the significant progress made by transformer models in machine reading comprehension, they still face limitations on complex reasoning tasks because explicit knowledge is absent from the input sequence. This paper proposes a novel attention pattern that overcomes this limitation by integrating reasoning knowledge derived from a heterogeneous graph into the transformer architecture through a graph-enhanced self-attention mechanism. The proposed attention pattern comprises three key elements: global-local attention for word tokens; graph attention for entity tokens, which attend more strongly to tokens connected to them in the graph than to unconnected ones; and relation-aware attention that accounts for the type of relationship between each entity token and word token, strengthening the attention between them when a relationship exists. The pattern is coupled with special relative position labels, allowing it to integrate with LUKE's entity-aware self-attention mechanism. Experimental results show that our model outperforms both the state-of-the-art LUKE-Graph and the baseline LUKE model on the ReCoRD dataset, which focuses on commonsense reasoning.
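The abstract describes the three components of the attention pattern only at a high level. The following is a minimal sketch, in PyTorch, of how such a pattern could be expressed as an additive attention bias combining global-local word attention, graph-restricted entity attention, and relation-type-aware entity-word attention. All tensor shapes, the token ordering, the window size, and the helper names are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def build_attention_bias(
    num_word_tokens: int,          # assumption: word tokens come first in the sequence
    num_entity_tokens: int,        # assumption: entity tokens follow the word tokens
    edges: list[tuple[int, int]],  # graph edges between entity-token indices
    relations: dict[tuple[int, int], int],  # (entity_idx, word_idx) -> relation type id
    relation_bias: torch.Tensor,   # learned bias per relation type, shape [num_relations]
    window: int = 4,               # local attention window for word tokens (assumed value)
    neg_inf: float = -1e9,
) -> torch.Tensor:
    """Return an additive bias of shape [seq_len, seq_len] that (1) restricts word-word
    attention to a local window plus a global token, (2) lets entity tokens attend only
    to graph-connected entity tokens, and (3) boosts entity-word pairs sharing a relation."""
    n = num_word_tokens + num_entity_tokens
    bias = torch.zeros(n, n)

    # (1) Global-local attention among word tokens: mask distant word-word pairs,
    #     while keeping token 0 (e.g. [CLS]) globally attendable in both directions.
    for i in range(num_word_tokens):
        for j in range(num_word_tokens):
            if abs(i - j) > window and i != 0 and j != 0:
                bias[i, j] = neg_inf

    # (2) Graph attention among entity tokens: connected entities keep full attention;
    #     unconnected entity pairs are suppressed.
    connected = {(a, b) for a, b in edges} | {(b, a) for a, b in edges}
    for a in range(num_entity_tokens):
        for b in range(num_entity_tokens):
            if a != b and (a, b) not in connected:
                bias[num_word_tokens + a, num_word_tokens + b] = neg_inf

    # (3) Relation-aware entity-word attention: add a learned, relation-specific bias
    #     when an entity token and a word token are linked by a known relation.
    for (a, j), rel in relations.items():
        bias[num_word_tokens + a, j] = relation_bias[rel]

    return bias
```

In this sketch the returned bias would be added to the raw attention scores before the softmax inside each (entity-aware) self-attention layer, so the masking and relation-dependent boosting modulate attention without changing the rest of the transformer.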