Similarity, or clone, detection has important applications in detecting copyright violations and software theft, in code search, and in identifying malicious components. There is now a good number of open-source and proprietary clone detectors for programs written in traditional programming languages. However, the increasing adoption of deep learning models in software poses a challenge to these tools: these models implement functions that are inscrutable black boxes. As more software includes these DNN functions, new techniques are needed to assess the similarity between the deep learning components of software. Previous work has unveiled techniques for comparing the representations learned at various layers of deep neural network models by feeding canonical inputs to the models. Our goal is to compare DNN functions when canonical inputs are not available, because they may not be available in many application scenarios. The challenge, then, is to generate appropriate inputs and to identify a metric that, for those inputs, is capable of representing the degree of functional similarity between two comparable DNN functions. Our approach uses random inputs with values between -1 and 1, in a shape compatible with what the DNN models expect. We then compare the outputs by performing correlation analysis. Our study shows how it is possible to perform similarity analysis even in the absence of meaningful canonical inputs: the responses of two comparable DNN functions to random inputs expose those functions' similarity, or lack thereof. Of all the metrics we tried, we find that Spearman's rank correlation coefficient is the most powerful and versatile, although in special cases other methods and metrics are more expressive. We present a systematic empirical study comparing the effectiveness of several similarity metrics using a dataset of 56,355 classifiers collected from GitHub, accompanied by a sensitivity analysis that reveals how certain training-related properties of the models affect the effectiveness of the similarity metrics. To the best of our knowledge, this is the first work to show that the similarity of DNN functions can be detected using random inputs. Our study of correlation metrics, and the identification of Spearman's rank correlation coefficient as the most powerful among them for this purpose, establishes a complete and practical method for DNN clone detection that can be used in the design of new tools. It may also serve as inspiration for other program analysis tasks whose approaches break in the presence of DNN components.
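To make the approach concrete, the following is a minimal sketch of the core idea, not the paper's exact protocol: the function names, input shapes, sample counts, and toy stand-in models below are illustrative assumptions. Both DNN functions receive the same random inputs drawn uniformly from [-1, 1], and their outputs are compared using Spearman's rank correlation coefficient (via scipy.stats.spearmanr).

```python
# Sketch of similarity detection for DNN functions using random inputs.
# Assumed helper names (dnn_similarity, model_a, model_b) are hypothetical.
import numpy as np
from scipy.stats import spearmanr


def dnn_similarity(model_a, model_b, input_shape, n_samples=256, seed=0):
    """Estimate functional similarity of two DNN functions on random inputs.

    model_a, model_b: callables mapping a batch of inputs to output vectors.
    input_shape: per-example input shape both models expect, e.g. (28, 28).
    Returns Spearman's rho between the flattened outputs of the two models.
    """
    rng = np.random.default_rng(seed)
    # Random inputs with values in [-1, 1], shaped as the models expect.
    x = rng.uniform(-1.0, 1.0, size=(n_samples, *input_shape))
    out_a = np.asarray(model_a(x)).ravel()
    out_b = np.asarray(model_b(x)).ravel()
    rho, _ = spearmanr(out_a, out_b)
    return rho


if __name__ == "__main__":
    # Toy linear stand-ins for DNN functions: a functional clone should
    # score near 1.0, an unrelated model near 0.0.
    rng = np.random.default_rng(42)
    w1 = rng.normal(size=(784, 10))
    w2 = rng.normal(size=(784, 10))
    original = lambda x: x.reshape(len(x), -1) @ w1
    clone = lambda x: x.reshape(len(x), -1) @ w1      # same weights
    unrelated = lambda x: x.reshape(len(x), -1) @ w2  # different weights
    print("clone rho:    ", dnn_similarity(original, clone, (28, 28)))
    print("unrelated rho:", dnn_similarity(original, unrelated, (28, 28)))
```

In a real setting, the callables would wrap trained classifiers (e.g., PyTorch or TensorFlow models), and the rank correlation would be computed on their output logits or class probabilities for the same batch of random inputs.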