Vector-matrix multiplication (VMM) accelerators have gained a lot of traction, especially due to the rise of convolutional neural networks (CNNs) and the desire to compute them on the edge. Besides the classical digital approach, analog computing has gone through a renaissance to push energy efficiency further. A more recent approach is called time-domain (TD) computing. In contrast to analog computing, TD computing permits easy technology as well as voltage scaling. As it has received limited research attention, it is not yet clear which scenarios are most suitable to be computed in the TD. In this work, we investigate these scenarios, focusing on energy efficiency while considering approximate computations that preserve accuracy. Both goals are addressed by a novel efficiency metric, which is used to find a baseline design. We use SPICE simulation data, which is fed into a Python framework to evaluate how performance scales for VMM computation. We see that TD computing offers the best energy efficiency for small to medium-sized arrays. With throughput and silicon footprint we investigate two additional metrics, giving a holistic comparison.
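To make the described evaluation flow concrete, the following is a minimal Python sketch in the spirit of such a framework; it is not the paper's actual tool. Every constant, function name, and the superlinear static-energy assumption for the TD evaluation window are illustrative placeholders, standing in for figures that the study extracts from SPICE simulations.

```python
# Illustrative sketch only: compares energy efficiency (TOPS/W) of a TD and a
# digital VMM design across array sizes. All numbers below are assumed, not
# measured; in the actual flow they would come from SPICE simulation data.

E_MAC_TD_PJ = 0.05       # assumed dynamic energy per TD multiply-accumulate (pJ)
E_MAC_DIGITAL_PJ = 0.12  # assumed energy per digital multiply-accumulate (pJ)
P_STATIC_TD_UW = 0.1     # assumed static power per TD cell (uW)
T_UNIT_NS = 10.0         # assumed unit delay contributed by each row (ns)


def td_vmm_energy_pj(n_rows: int, n_cols: int) -> float:
    """Energy of one n_rows x n_cols VMM computed in the time domain.

    The evaluation window is assumed to grow with the number of rows
    (accumulated delays), so static energy scales superlinearly with rows.
    """
    dynamic = n_rows * n_cols * E_MAC_TD_PJ
    window_ns = n_rows * T_UNIT_NS
    static = n_rows * n_cols * P_STATIC_TD_UW * window_ns * 1e-3  # uW*ns -> pJ
    return dynamic + static


def digital_vmm_energy_pj(n_rows: int, n_cols: int) -> float:
    """Energy of the same VMM on a conventional digital MAC datapath."""
    return n_rows * n_cols * E_MAC_DIGITAL_PJ


def tops_per_watt(ops: int, energy_pj: float) -> float:
    """Operations per unit energy; 1 op/pJ equals 1 TOPS/W."""
    return ops / energy_pj


# Sweep square array sizes: with these toy numbers the TD design wins for
# small to medium arrays and falls behind once the static term dominates.
for n in (8, 16, 32, 64, 128, 256):
    ops = 2 * n * n  # one multiply and one add per MAC
    td = tops_per_watt(ops, td_vmm_energy_pj(n, n))
    dig = tops_per_watt(ops, digital_vmm_energy_pj(n, n))
    print(f"{n:3d} x {n:<3d}   TD: {td:6.2f} TOPS/W   digital: {dig:6.2f} TOPS/W")
```

The only modelling choice that drives the trend in this toy sweep is that the TD evaluation window, and with it the static energy, grows with the number of accumulated rows; the actual crossover point depends entirely on the SPICE-derived figures of the real designs.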