Aspect-level sentiment classification aims to identify the sentiment polarity towards a specific aspect term in a sentence. Most current approaches mainly consider semantic information, utilizing attention mechanisms to capture the interactions between the context and the aspect term. In this paper, we propose to employ graph convolutional networks (GCNs) on the dependency tree to learn syntax-aware representations of aspect terms. GCNs often perform best with two layers, and deeper GCNs bring no additional gain due to the over-smoothing problem. However, in some cases, important context words cannot be reached within two hops on the dependency tree. We therefore design a selective-attention-based GCN block (SA-GCN) to find the most important context words and directly aggregate this information into the aspect-term representation. We conduct experiments on the SemEval 2014 Task 4 datasets. Our experimental results show that our model outperforms the current state of the art.
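As a rough illustration of the core mechanism (not the authors' implementation, whose details are not given in the abstract), a single GCN layer over a dependency-tree adjacency matrix can be sketched as follows; all variable names and the mean-pooling normalization are assumptions:

```python
import numpy as np

def gcn_layer(H, A, W):
    """One GCN layer over a dependency graph.
    H: (n, d_in) word representations, A: (n, n) adjacency matrix
    of the dependency tree, W: (d_in, d_out) learned weights."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops so each word keeps its own features
    deg = A_hat.sum(axis=1, keepdims=True)  # node degrees for normalization
    H_agg = (A_hat @ H) / deg               # average each word's one-hop neighborhood
    return np.maximum(0.0, H_agg @ W)       # linear transform + ReLU

# Toy 3-word sentence; edges follow a hypothetical dependency parse
# where word 1 is the head of words 0 and 2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.random.randn(3, 4)
W = np.random.randn(4, 8)
out = gcn_layer(H, A, W)  # (3, 8): syntax-aware word representations
```

Stacking two such layers lets each word aggregate information from context words up to two hops away on the tree, which is exactly the limit the SA-GCN block is designed to overcome.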