Edge-AI applications still face considerable challenges in achieving computational efficiency in resource-constrained environments. This work presents RAMAN, a resource-efficient, approximate posit(8,2)-based Multiply-Accumulate (MAC) architecture designed to improve hardware efficiency under bandwidth limitations. The proposed REAP (Resource-Efficient Approximate Posit) MAC engine at the core of RAMAN applies approximation in the posit multiplier to achieve significant area and power reductions with only a marginal impact on accuracy. To support diverse AI workloads, the MAC unit is incorporated into a scalable Vector Execution Unit (VEU), which enables hardware reuse and parallelism across deep neural network layers. Furthermore, we propose an algorithm-hardware co-design framework that incorporates approximation-aware training to evaluate the impact of hardware-level approximation on application-level performance. Empirical validation shows that the proposed REAP MAC achieves up to 46% LUT savings on FPGA, and 35.66% area and 31.28% power reductions on ASIC, over the baseline Posit Dot-Product Unit (PDPU) design, while maintaining high accuracy (98.45%) on handwritten digit recognition. RAMAN thus offers a promising trade-off between hardware efficiency and learning performance, making it well suited to next-generation edge intelligence.
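For readers less familiar with the number format named above, the following is a minimal Python sketch of a posit(8,2) software reference model: it decodes the sign, regime, exponent, and fraction fields and performs an exact multiply-accumulate after conversion to floating point. It is purely illustrative; the function names `decode_posit8_es2` and `mac_reference` and the example operands are our own, and the sketch does not reproduce the REAP approximate-multiplier datapath or the VEU described in the paper.

```python
# Illustrative posit(8,2) reference decoder (not the RAMAN/REAP hardware model).
def decode_posit8_es2(bits: int) -> float:
    """Decode an 8-bit posit with es = 2 into a Python float."""
    n, es = 8, 2
    bits &= 0xFF
    if bits == 0x00:
        return 0.0
    if bits == 0x80:                      # 1000_0000 encodes NaR (Not a Real)
        return float("nan")

    sign = -1.0 if (bits >> (n - 1)) & 1 else 1.0
    if sign < 0.0:                        # negative posits decode via two's complement
        bits = (-bits) & 0xFF

    body = bits & 0x7F                    # the 7 bits after the sign bit
    pos = n - 2                           # index of the first regime bit
    first = (body >> pos) & 1
    run = 0
    while pos >= 0 and ((body >> pos) & 1) == first:
        run += 1                          # regime is a run of identical bits
        pos -= 1
    regime = (run - 1) if first else -run
    pos -= 1                              # skip the regime terminator bit, if any

    exp = 0
    for _ in range(es):                   # exponent: next es bits, zero-padded
        exp <<= 1
        if pos >= 0:
            exp |= (body >> pos) & 1
            pos -= 1

    frac_bits = pos + 1                   # remaining bits form the fraction
    frac = body & ((1 << frac_bits) - 1) if frac_bits > 0 else 0
    fraction = 1.0 + (frac / (1 << frac_bits) if frac_bits > 0 else 0.0)

    useed = 1 << (1 << es)                # useed = 2^(2^es) = 16 for es = 2
    return sign * (useed ** regime) * (2.0 ** exp) * fraction


# Functional MAC reference: decode, multiply, accumulate in exact arithmetic.
# The REAP engine instead approximates the posit multiplier in hardware.
def mac_reference(weights: list[int], activations: list[int]) -> float:
    acc = 0.0
    for w, a in zip(weights, activations):
        acc += decode_posit8_es2(w) * decode_posit8_es2(a)
    return acc


if __name__ == "__main__":
    # 0x40 -> 1.0, 0x60 -> 16.0, 0xC0 -> -1.0 under posit(8,2)
    print(mac_reference([0x40, 0x60], [0x40, 0xC0]))   # 1.0*1.0 + 16.0*(-1.0) = -15.0
```

Such a bit-accurate software reference is what an approximation-aware training flow would emulate during the forward pass, so that the accuracy reported for the hardware matches what the network saw during training.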