Graph convolutional networks (GCNs) have shown remarkable learning capabilities when processing graph-structured data found inherently in many application areas. GCNs propagate the outputs of neural networks embedded in each vertex over multiple iterations to exploit the relations captured by the underlying graphs. Consequently, they incur significant computation and irregular communication overheads, which call for GCN-specific hardware accelerators. To this end, this paper presents a communication-aware in-memory computing architecture (COIN) for GCN hardware acceleration. Besides accelerating the computation using custom compute elements (CEs) and in-memory computing, COIN aims at minimizing the intra- and inter-CE communication in GCN operations to optimize performance and energy efficiency. Experimental evaluations with widely used datasets show up to 105x improvement in energy consumption compared to a state-of-the-art GCN accelerator.
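The vertex-wise aggregate-and-transform step described above can be illustrated with a minimal NumPy sketch of one graph convolution layer in the common Kipf-Welling formulation, H' = relu(A_norm X W). This is a generic illustration of GCN computation, not COIN's hardware mapping; the toy graph, feature sizes, and weights are hypothetical.

```python
import numpy as np

# Toy undirected graph: 4 vertices, edges (0-1, 1-2, 2-3)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

A_hat = A + np.eye(4)                                   # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))  # degree^(-1/2)
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt                # symmetric normalization

rng = np.random.default_rng(0)
X = rng.random((4, 8))  # per-vertex input features (4 vertices, 8 features)
W = rng.random((8, 4))  # learnable layer weights (8 -> 4 features)

# One layer: aggregate neighbor features (sparse, irregular access pattern),
# then apply the dense neural-network transform shared by all vertices.
H = np.maximum(A_norm @ X @ W, 0.0)  # ReLU activation
print(H.shape)  # (4, 4): new 4-dim embedding per vertex
```

The aggregation term `A_norm @ X` is the source of the irregular communication the abstract refers to: each vertex must gather features from its neighbors, whose locations in memory follow the graph structure rather than a regular stride.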