A typical optimization for customized accelerators targeting error-tolerant applications such as multimedia, recognition, and classification is to replace conventional arithmetic units, such as multipliers and adders, with approximate counterparts to improve energy efficiency while meeting accuracy requirements. However, the large number of arithmetic units and the diverse options for approximate units create an exceedingly large design space. There is therefore a pressing need for an end-to-end design framework capable of navigating this design space for approximation optimization. Traditional methods that rely on simulation-based or black-box model evaluations suffer from either high computational cost or limited accuracy and scalability, posing significant challenges to the optimization process. In this paper, we propose a Graph Neural Network (GNN) model that leverages the physical connections among arithmetic units to capture their influence on the performance, power, area (PPA), and accuracy of the accelerator. In particular, we observe that the critical path plays a key role in the node features of the GNN model, and embedding it in the feature vector greatly improves prediction quality. Building on these models, which enable rapid and efficient PPA and accuracy prediction for various approximate accelerator configurations, we can explore the large design space effectively and construct an end-to-end accelerator approximation framework named ApproxPilot. Our experimental results demonstrate that ApproxPilot outperforms state-of-the-art approximation optimization frameworks in both performance and hardware overhead under the same accuracy constraints.
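To make the idea of critical-path-aware node features more concrete, the sketch below builds per-unit feature vectors over a toy dataflow graph of arithmetic units, where each node's feature includes its longest-path (critical-path) arrival time. This is only an illustrative sketch under assumed unit names, delays, and feature layout; it is not the paper's actual feature definition or GNN implementation.

```python
# Illustrative sketch (assumptions, not the paper's code): embed each arithmetic
# unit's critical-path arrival time into its GNN node feature vector.
from collections import defaultdict

# Toy DAG of arithmetic units: edges follow the dataflow (producer -> consumer).
edges = [("add0", "mul0"), ("add1", "mul0"), ("mul0", "add2"), ("add3", "add2")]
unit_type = {"add0": "adder", "add1": "adder", "add2": "adder",
             "add3": "adder", "mul0": "multiplier"}
# Assumed per-unit delays (arbitrary units) used for the critical-path computation.
delay = {"adder": 1.0, "multiplier": 3.0}

succ = defaultdict(list)
indeg = defaultdict(int)
nodes = set(unit_type)
for u, v in edges:
    succ[u].append(v)
    indeg[v] += 1

# Longest-path (critical-path) arrival time per node, computed in topological order.
arrival = {n: delay[unit_type[n]] for n in nodes}
ready = [n for n in nodes if indeg[n] == 0]
while ready:
    u = ready.pop()
    for v in succ[u]:
        arrival[v] = max(arrival[v], arrival[u] + delay[unit_type[v]])
        indeg[v] -= 1
        if indeg[v] == 0:
            ready.append(v)

# Node feature vector: [is_adder, is_multiplier, critical-path arrival time].
features = {
    n: [float(unit_type[n] == "adder"),
        float(unit_type[n] == "multiplier"),
        arrival[n]]
    for n in sorted(nodes)
}
for n, f in features.items():
    print(n, f)
```

Such feature vectors, together with the graph's edges, would then be fed to a standard GNN to regress PPA and accuracy; the choice of additional per-unit features (e.g., bit width or approximation variant) is left open here.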