The brain network is a large-scale complex network with scale-free, small-world, and modularity properties, which largely underpin its highly efficient operation at massive scale. In this paper, we propose to synthesize brain-network-inspired interconnections for large-scale networks-on-chip. First, we propose a method to generate brain-network-inspired topologies with limited scale-free and power-law small-world properties, which achieve a low total link length and an extremely low average hop count, approximately proportional to the logarithm of the network size. In addition, given the large-scale applications and the modular topology, we present an application mapping method, including task mapping and deterministic deadlock-free routing, to minimize power consumption and hop count. Finally, the cycle-accurate simulator BookSim2 is used to validate the architecture's performance under different synthetic traffic patterns and large-scale test cases, including real-world communication networks for graph processing applications. Experiments show that, compared with other topologies and methods, the NoC design generated by the proposed method achieves a significantly lower average hop count and lower average latency. In particular, for graph processing applications with power-law, tightly coupled inter-core communication, the brain-network-inspired NoC has up to 70% lower average hop count and 75% lower average latency than mesh-based NoCs.
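The logarithmic hop-count scaling claimed above is a hallmark of scale-free small-world graphs. The following is a minimal sketch, not the paper's actual topology-generation method: it uses plain Barabási–Albert preferential attachment (a standard way to obtain a power-law degree distribution) and a BFS-based average hop count, so one can observe that the average hop count stays near ln(n) even as the network grows. All function names here are illustrative.

```python
import math
import random
from collections import deque

def barabasi_albert(n, m, seed=42):
    """Build a scale-free graph by preferential attachment: each new node
    links to m existing nodes chosen with probability proportional to degree.
    This is a generic stand-in, not the paper's constrained generator."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    repeated = []  # each node appears once per incident edge (degree-weighted)
    # Seed graph: connect node m to all of 0..m-1.
    for t in range(m):
        adj[m].add(t)
        adj[t].add(m)
        repeated += [m, t]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:          # m distinct degree-biased targets
            targets.add(rng.choice(repeated))
        for t in targets:
            adj[new].add(t)
            adj[t].add(new)
            repeated += [new, t]
    return adj

def avg_hops(adj):
    """Average shortest-path hop count over all reachable node pairs (BFS)."""
    total = pairs = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

G = barabasi_albert(256, 2)
AVG = avg_hops(G)
print(f"n=256: average hops = {AVG:.2f}, ln(n) = {math.log(256):.2f}")
```

For a 256-node instance the average hop count lands well below ln(256) ≈ 5.5, whereas a 16×16 mesh averages over 10 hops, which is consistent with the latency gap the abstract reports.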