Convolutional neural networks (CNNs) are currently among the most widely used neural networks and achieve state-of-the-art performance on many problems. While originally applied to computer vision tasks, CNNs perform well on any data exhibiting spatial relationships, not just images, and have been applied across many fields. However, recent works have highlighted how CNNs, like other deep learning models, are sensitive to noise injection, which can jeopardise their performance. This paper quantifies the numerical uncertainty arising from floating point arithmetic inaccuracies in the inference stage of DeepGOPlus, a CNN that predicts protein function, in order to determine its numerical stability. In addition, this paper investigates the possibility of using reduced-precision floating point formats for DeepGOPlus inference to reduce memory consumption and latency. This is achieved with Monte Carlo Arithmetic, a technique that experimentally quantifies floating point operation errors, and VPREC, a tool that emulates results with customizable floating point precision formats. Focus is placed on the inference stage as it is the main deliverable of the DeepGOPlus model, will be used across diverse environments, and is therefore most likely to be exposed to noise. Furthermore, studies have shown that the inference stage is the part of the model most amenable to reduced precision. All in all, we find that the numerical uncertainty of the DeepGOPlus CNN is very low at its current numerical precision format, but that the model cannot currently be reduced to a lower precision that might render it more lightweight.
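To illustrate the idea behind Monte Carlo Arithmetic mentioned above, the following Python sketch perturbs floating point values at a chosen mantissa bit and estimates the number of significant digits from repeated runs. This is only a minimal, illustrative sketch under assumed conventions (the helper names mca_perturb and significant_digits are hypothetical), not the instrumentation actually used for DeepGOPlus.

```python
import numpy as np

def mca_perturb(x, t=53, rng=None):
    """Simulate Monte Carlo Arithmetic's inexact function:
    inexact(x) = x + 2**(e_x - t) * xi, with xi uniform in (-0.5, 0.5),
    where e_x is the binary exponent of x and t the virtual precision.
    Illustrative only; real MCA instruments every arithmetic operation."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=np.float64)
    nonzero = x != 0
    # Binary exponent e_x such that 2**(e_x - 1) <= |x| < 2**e_x
    e = np.where(nonzero, np.floor(np.log2(np.abs(np.where(nonzero, x, 1.0)))) + 1, 0.0)
    xi = rng.uniform(-0.5, 0.5, size=x.shape)
    return x + np.where(nonzero, 2.0 ** (e - t) * xi, 0.0)

def significant_digits(samples):
    """Estimate significant decimal digits across perturbed runs
    using s = -log10(|sigma / mu|)."""
    samples = np.asarray(samples, dtype=np.float64)
    mu, sigma = samples.mean(), samples.std(ddof=1)
    if sigma == 0.0:
        return float(np.finfo(np.float64).precision)  # no observed variation
    return float(-np.log10(abs(sigma / mu)))

# Example: numerical uncertainty of a dot product under simulated perturbations
rng = np.random.default_rng(0)
a, b = rng.normal(size=1000), rng.normal(size=1000)
results = [float(np.dot(mca_perturb(a, rng=rng), mca_perturb(b, rng=rng)))
           for _ in range(30)]
print(f"significant digits ~ {significant_digits(results):.1f}")
```

In the same spirit, lowering the virtual precision t emulates reduced-precision formats, which is conceptually what VPREC does when exploring whether inference can run with fewer mantissa bits.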