Despite the strides made by machine learning (ML) based performance modeling, two major concerns may impede production-ready ML applications in EDA: stringent accuracy requirements and generalization capability. To this end, we propose hybrid graph neural network (GNN) based approaches toward highly accurate quality-of-result (QoR) estimation with strong generalization capability, specifically targeting logic synthesis optimization. The key idea is to simultaneously leverage spatio-temporal information from hardware designs and logic synthesis flows to forecast the performance (i.e., delay/area) of various synthesis flows on different designs. The structural characteristics of hardware designs are distilled and represented by GNNs; the temporal knowledge (i.e., the relative ordering of logic transformations) in synthesis flows is imposed on the hardware designs by combining either a virtually added supernode or a sequence-processing model with conventional GNN models. Evaluation on 3.3 million data points shows that the testing mean absolute percentage error (MAPE) on designs seen and unseen during training is no more than 1.2% and 3.1%, respectively, which is 7-15X lower than in existing studies.
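A minimal sketch of the hybrid idea described above (hypothetical illustration, not the paper's implementation): spatial information from the design graph is summarized by one round of GNN-style neighbor averaging, while temporal information from the synthesis flow (an ordered list of transformation IDs) is folded in with a simple order-sensitive recurrent update, so the same set of transformations applied in a different order yields a different QoR estimate. All function names and weights here are illustrative assumptions.

```python
def gnn_readout(node_feats, edges):
    """One round of mean-aggregation message passing, then a graph-level mean.

    node_feats: list of per-node feature vectors (lists of floats).
    edges: list of (u, v) undirected edges over node indices.
    """
    n = len(node_feats)
    dim = len(node_feats[0])
    agg = [list(f) for f in node_feats]   # include each node's own features
    deg = [1] * n                         # self counts toward the average
    for u, v in edges:
        for d in range(dim):
            agg[u][d] += node_feats[v][d]
            agg[v][d] += node_feats[u][d]
        deg[u] += 1
        deg[v] += 1
    # mean over neighbors per node, then mean over nodes per dimension
    return [sum(agg[i][d] / deg[i] for i in range(n)) / n for d in range(dim)]

def flow_encoding(flow, decay=0.5):
    """Order-sensitive recurrent encoding of a synthesis-flow sequence."""
    h = 0.0
    for t in flow:              # t: numeric ID of a logic transformation
        h = decay * h + t       # later transformations weigh more heavily
    return h

def predict_qor(node_feats, edges, flow, w_graph=1.0, w_flow=0.1, bias=0.0):
    """Toy QoR (e.g., delay) estimate from graph summary + flow encoding."""
    g = gnn_readout(node_feats, edges)
    return bias + w_graph * sum(g) + w_flow * flow_encoding(flow)

# A 3-node chain with scalar features; the same transformations in a
# different order produce a different prediction.
feats = [[1.0], [2.0], [3.0]]
edges = [(0, 1), (1, 2)]
print(predict_qor(feats, edges, flow=[1, 2, 3]))
print(predict_qor(feats, edges, flow=[3, 2, 1]))
```

The supernode variant mentioned in the abstract would instead inject the flow encoding as an extra node connected to every design node before message passing; the recurrent variant above corresponds to the sequence-processing branch.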