Deep neural networks yield state-of-the-art results in many computer vision and human-machine interface tasks such as object recognition and speech recognition. Since these networks are computationally expensive, customized accelerators are designed to achieve the required performance at lower cost and power. One of the key building blocks of these neural networks is the non-linear activation function, such as sigmoid, hyperbolic tangent (tanh), and ReLU. A low-complexity, accurate hardware implementation of the activation function is required to meet the performance and area targets of neural network accelerators. This paper presents an implementation of the tanh function using Catmull-Rom spline interpolation. State-of-the-art accuracy is achieved with this method using comparatively smaller logic area.
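To make the approach concrete, the following is a minimal sketch of a Catmull-Rom spline approximation of tanh in C. It uses floating point rather than the fixed-point datapath a hardware accelerator would use, and the table size, input range, and saturation threshold are illustrative assumptions, not values from the paper:

```c
#include <stdio.h>
#include <math.h>

/* Assumed parameters for illustration: 16 uniform knots over [0, 4). */
#define N_KNOTS 16
#define X_MAX   4.0
#define STEP    (X_MAX / N_KNOTS)

/* tanh samples at x = (i - 1) * STEP; padded so every interior segment
   has the four control points Catmull-Rom interpolation requires. */
static double knot[N_KNOTS + 3];

static void init_table(void)
{
    for (int i = 0; i < N_KNOTS + 3; i++)
        knot[i] = tanh((i - 1) * STEP);
}

/* Evaluate the Catmull-Rom cubic through p0..p3 at t in [0, 1];
   the curve passes through p1 at t = 0 and p2 at t = 1. */
static double catmull_rom(double p0, double p1, double p2, double p3, double t)
{
    return 0.5 * ((2.0 * p1)
                + (-p0 + p2) * t
                + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t
                + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t * t * t);
}

double tanh_spline(double x)
{
    double ax = fabs(x);
    if (ax >= X_MAX - STEP)          /* saturate outside the table */
        return x < 0 ? -1.0 : 1.0;

    int    seg = (int)(ax / STEP);   /* segment index */
    double t   = ax / STEP - seg;    /* fractional position within segment */
    double y   = catmull_rom(knot[seg], knot[seg + 1],
                             knot[seg + 2], knot[seg + 3], t);
    return x < 0 ? -y : y;           /* tanh is odd: mirror for x < 0 */
}

int main(void)
{
    init_table();
    for (double x = -3.0; x <= 3.0; x += 0.75)
        printf("x=%6.2f  spline=%9.6f  tanh=%9.6f\n",
               x, tanh_spline(x), tanh(x));
    return 0;
}
```

Storing samples only for x >= 0 and mirroring the result exploits the odd symmetry of tanh, halving the table size; in hardware, the uniform knot spacing lets the segment index and fractional offset come directly from bit fields of the input, avoiding a divider.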