Logic synthesis is one of the most important steps in the design and implementation of digital chips, with a large impact on the final Quality of Results (QoR). For the most general input circuits, modeled as Directed Acyclic Graphs (DAGs), many logic synthesis problems such as delay or area minimization are NP-complete; hence, no efficient optimal solution is known. This is why many classical logic optimization functions follow greedy approaches that are easily trapped in local minima, which prevents improving QoR as much as needed. We believe that Artificial Intelligence (AI), and more specifically Reinforcement Learning (RL) algorithms, can help solve this problem, because AI and RL can escape local minima and thereby reduce QoR metrics further. Our experiments on both open-source and industrial benchmark circuits show that making logic synthesis optimization functions AI-driven yields significant improvements in important metrics such as area, delay, and power. For example, our RL-based rewriting algorithm improves post-synthesis total cell area by up to 69.3% compared to a classical rewriting algorithm with no AI awareness.
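To illustrate the intuition behind RL-driven pass selection (this is a hypothetical toy sketch, not the paper's algorithm; the pass names and the cost model are invented for illustration), an epsilon-greedy agent can choose which optimization pass to apply next, where occasional random moves give it a chance to escape local minima that a purely greedy rewriter would get stuck in:

```python
import random

ACTIONS = ["rewrite", "refactor", "balance"]  # hypothetical candidate synthesis passes

def toy_area_after(action, area):
    # Stand-in cost model (invented for this sketch): each pass shrinks
    # the cell area by a fixed factor.
    factor = {"rewrite": 0.97, "refactor": 0.99, "balance": 0.98}
    return area * factor[action]

def epsilon_greedy_synthesis(area, steps=50, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}      # running estimate of each pass's reward
    counts = {a: 0 for a in ACTIONS}
    for _ in range(steps):
        if rng.random() < epsilon:     # explore: try a random pass
            action = rng.choice(ACTIONS)
        else:                          # exploit: pick the best pass so far
            action = max(ACTIONS, key=lambda a: q[a])
        new_area = toy_area_after(action, area)
        reward = area - new_area       # area reduction is the reward signal
        counts[action] += 1
        q[action] += (reward - q[action]) / counts[action]  # incremental mean
        area = new_area
    return area

final_area = epsilon_greedy_synthesis(1000.0)
```

In a real flow, the reward would come from the synthesis tool's QoR report rather than a closed-form cost model, and the action space would cover the tool's actual optimization commands.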