Test point insertion (TPI) is a widely used technique for testability enhancement, especially for logic built-in self-test (LBIST), which typically suffers from relatively low fault coverage. In this paper, we propose a novel TPI approach based on deep reinforcement learning (DRL), named DeepTPI. Unlike previous learning-based solutions that formulate the TPI task as a supervised-learning problem, we train a novel DRL agent, instantiated as the combination of a graph neural network (GNN) and a Deep Q-Learning network (DQN), to maximize the test coverage improvement. Specifically, we model circuits as directed graphs and design a graph-based value network to estimate the action values for inserting different test points. The policy of the DRL agent is defined as selecting the action with the maximum value. Moreover, we apply general node embeddings from a pre-trained model to enhance the node features, and propose a dedicated testability-aware attention mechanism for the value network. Experimental results on circuits of various scales show that DeepTPI significantly improves test coverage compared with a commercial DFT tool. The code of this work is available at https://github.com/cure-lab/DeepTPI.
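To make the agent design concrete, the following is a minimal sketch (not the authors' code) of a graph-based value network that scores candidate test-point insertions per node and a greedy policy that selects the highest-valued (node, action) pair, in the spirit of the GNN + DQN agent described above. The module names, action set, and hyperparameters are illustrative assumptions; the actual DeepTPI architecture (pre-trained node embeddings, testability-aware attention) is available in the linked repository.

```python
# Illustrative sketch: a graph-based Q-value network for TPI and a greedy policy.
# All names and hyperparameters are assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphQNetwork(nn.Module):
    """Maps a circuit graph to per-node Q-values for TPI actions
    (e.g., 0 = no insertion, 1 = control point, 2 = observation point)."""

    def __init__(self, in_dim, hid_dim=64, num_actions=3, num_layers=3):
        super().__init__()
        self.input_proj = nn.Linear(in_dim, hid_dim)
        # Simple message-passing layers over the directed circuit graph.
        self.self_lins = nn.ModuleList([nn.Linear(hid_dim, hid_dim) for _ in range(num_layers)])
        self.nbr_lins = nn.ModuleList([nn.Linear(hid_dim, hid_dim) for _ in range(num_layers)])
        self.q_head = nn.Linear(hid_dim, num_actions)

    def forward(self, x, adj):
        # x:   [num_nodes, in_dim]      node features (gate type, testability measures, ...)
        # adj: [num_nodes, num_nodes]   adjacency matrix of the directed circuit graph
        h = F.relu(self.input_proj(x))
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        for self_lin, nbr_lin in zip(self.self_lins, self.nbr_lins):
            msg = adj @ h / deg                       # mean aggregation over fan-in neighbors
            h = F.relu(self_lin(h) + nbr_lin(msg))
        return self.q_head(h)                         # [num_nodes, num_actions] Q-values


def greedy_tpi_action(q_net, x, adj):
    """Policy of the DRL agent: pick the (node, action) pair with maximum estimated value."""
    with torch.no_grad():
        q = q_net(x, adj)                              # [num_nodes, num_actions]
    flat_idx = int(torch.argmax(q))
    node, action = divmod(flat_idx, q.shape[1])
    return node, action


if __name__ == "__main__":
    num_nodes, in_dim = 8, 16
    x = torch.randn(num_nodes, in_dim)                 # placeholder node features
    adj = (torch.rand(num_nodes, num_nodes) > 0.7).float()
    q_net = GraphQNetwork(in_dim)
    node, action = greedy_tpi_action(q_net, x, adj)
    print(f"insert action {action} at node {node}")
```

In a full DQN training loop, the Q-values above would be regressed against reward signals derived from the test coverage improvement after each insertion; the sketch only shows the value-network forward pass and the argmax policy.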