Logic optimization is an NP-hard problem commonly approached through hand-engineered heuristics. We propose to combine graph convolutional networks with reinforcement learning and a novel, scalable node embedding method to learn which local transforms should be applied to the logic graph. We show that this method achieves size reductions similar to those of ABC on smaller circuits and outperforms it by 1.5-1.75x on larger random graphs.
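As a rough illustration of the idea described above (not the authors' implementation), the sketch below shows how a graph-convolution layer over the logic graph could produce per-node embeddings that a policy head turns into scores over a small set of candidate local transforms; all names, shapes, and the number of transforms are assumptions for illustration only.

```python
# Hypothetical sketch: GCN node embeddings feeding a per-node transform policy.
import numpy as np

NUM_TRANSFORMS = 4   # assumed count of local rewrite rules (illustrative only)

def gcn_layer(adj, feats, weight):
    """One graph-convolution step: mean-aggregate neighbor features, then project."""
    deg = adj.sum(axis=1, keepdims=True) + 1e-8        # node degrees (avoid divide-by-zero)
    agg = (adj @ feats) / deg                           # mean aggregation over neighbors
    return np.maximum(agg @ weight, 0.0)                # ReLU nonlinearity

def transform_logits(adj, feats, w_gcn, w_policy):
    """Per-node scores over candidate local transforms (the policy's logits)."""
    emb = gcn_layer(adj, feats, w_gcn)                  # node embeddings
    return emb @ w_policy                                # shape: (num_nodes, NUM_TRANSFORMS)

# Toy usage on a random 5-node graph with 3 input features per node.
rng = np.random.default_rng(0)
adj = (rng.random((5, 5)) < 0.4).astype(float)
feats = rng.standard_normal((5, 3))
logits = transform_logits(adj, feats,
                          rng.standard_normal((3, 8)),
                          rng.standard_normal((8, NUM_TRANSFORMS)))
action = np.unravel_index(np.argmax(logits), logits.shape)  # (node, transform) to apply
```

In an RL setting, the chosen (node, transform) pair would be applied to the logic graph and the resulting size reduction used as the reward signal; this sketch only shows the scoring step.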