In this thesis, several linear and non-linear machine learning attacks on optical physical unclonable functions (PUFs) are presented. To this end, a simulation of such a PUF is implemented to generate a variety of datasets differing in several factors, both to find the best simulation setup and to study the behavior of the machine learning attacks under different conditions. All datasets are evaluated in terms of individual samples and their mutual correlations. Subsequently, both linear and deep learning approaches are used to attack these PUF simulations, and the impact of the different factors on the datasets' security level against attackers is comprehensively investigated. In addition, the performance differences between the two attack methods are highlighted using several independent metrics. Several improvements to these models, as well as new attacks, are introduced and investigated sequentially, with the goal of progressively improving modeling performance. This leads to the development of an attack capable of almost perfectly predicting the outputs of the simulated PUF. Finally, data from a real optical PUF is examined, both to compare it with the simulated data and to assess how the presented machine learning models would perform in the real world. The results show that all models meet the defined criterion for a successful machine learning attack.