Circuit representation learning is a promising research direction in the electronic design automation (EDA) field. With sufficient pre-training data, the learned general yet effective representation can help solve multiple downstream EDA tasks after fine-tuning on a small amount of task-related data. However, existing solutions only target combinational circuits, significantly limiting their applications. In this work, we propose DeepSeq, a novel representation learning framework for sequential netlists. Specifically, we introduce a dedicated graph neural network (GNN) with a customized propagation scheme to exploit the temporal correlations between gates in sequential circuits. To ensure effective learning, we propose a multi-task training objective with two sets of strongly related supervision: the logic probability and the transition probability at each node. A novel dual attention aggregation mechanism is introduced to learn both tasks efficiently. Experimental results on various benchmark circuits show that DeepSeq outperforms other GNN models for sequential circuit learning. We further evaluate the generalization capability of DeepSeq on a downstream power estimation task: after fine-tuning, DeepSeq can accurately estimate power across various circuits under different workloads.
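To make the multi-task setup concrete, the sketch below shows one way the dual attention aggregation and the joint supervision on logic and transition probabilities could be wired together. This is a minimal illustration under assumed design choices (the class name `DualAttentionAggregator`, the GRU-based state update, and the L1 losses are all hypothetical), not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualAttentionAggregator(nn.Module):
    """Illustrative sketch: two attention scorers weight the messages
    from a gate's predecessors separately for the logic-probability
    and the transition-probability task; a GRU cell then fuses both
    aggregated messages into the updated node state."""

    def __init__(self, dim: int):
        super().__init__()
        self.attn_logic = nn.Linear(2 * dim, 1)  # scores for logic-probability task
        self.attn_trans = nn.Linear(2 * dim, 1)  # scores for transition-probability task
        self.update = nn.GRUCell(2 * dim, dim)   # fuses both aggregated messages

    def forward(self, h_node: torch.Tensor, h_preds: torch.Tensor) -> torch.Tensor:
        # h_node: (dim,) current gate state; h_preds: (num_preds, dim) fan-in states
        pairs = torch.cat([h_preds, h_node.expand_as(h_preds)], dim=-1)
        w_logic = torch.softmax(self.attn_logic(pairs), dim=0)
        w_trans = torch.softmax(self.attn_trans(pairs), dim=0)
        msg = torch.cat([(w_logic * h_preds).sum(0),
                         (w_trans * h_preds).sum(0)])
        return self.update(msg.unsqueeze(0), h_node.unsqueeze(0)).squeeze(0)

# Multi-task objective (assumed form): supervise both per-node
# probability predictions jointly with a summed regression loss.
def multitask_loss(pred_logic, pred_trans, tgt_logic, tgt_trans):
    return F.l1_loss(pred_logic, tgt_logic) + F.l1_loss(pred_trans, tgt_trans)
```

Keeping a separate attention scorer per task lets each supervision signal emphasize different predecessors while the two aggregated messages still share one node embedding, which is the intuition behind training both objectives from a single representation.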