In recent years, various deep learning techniques have been exploited in side-channel attacks in the hope of achieving stronger attack results. Most of these works concentrate on improving network architectures or proposing novel algorithms, under the assumption that adequate profiling traces are available to train an appropriate neural network. In practical scenarios, however, profiling traces are often insufficient, which causes the network to learn poorly and compromises attack performance. In this paper, we investigate a data augmentation technique called mixup and, for the first time, propose to exploit it in deep-learning-based side-channel attacks, with the aim of expanding the profiling set and increasing the chances of mounting a successful attack. We perform Correlation Power Analysis on both the generated and the original traces, and find that they are consistent with respect to leakage information. Our experiments show that mixup is indeed capable of enhancing attack performance, especially when profiling traces are insufficient. Specifically, when the training set is reduced to 30% of the original set, mixup significantly decreases the number of attack traces required. We test three mixup parameter values and conclude that, in general, all of them bring improvements. In addition, we compare three leakage models and unexpectedly find that the least-significant-bit model, which is less frequently used in previous works, actually surpasses the prevalent identity model and Hamming weight model in terms of attack results.
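The mixup augmentation described above forms convex combinations of randomly paired training examples: a generated trace is x̃ = λ·x_i + (1−λ)·x_j with label ỹ = λ·y_i + (1−λ)·y_j, where λ ~ Beta(α, α). The following is a minimal sketch of this procedure applied to a profiling set of power traces; the function name and array shapes are illustrative, not the authors' implementation.

```python
import numpy as np

def mixup(traces, labels, alpha=0.2, rng=None):
    """Generate mixup-augmented profiling data.

    traces: (N, L) array of power traces.
    labels: (N, C) array of one-hot leakage labels.
    alpha:  Beta-distribution parameter; the mixup parameter tested
            at several values in the paper.
    Returns arrays of the same shapes containing convex combinations
    of randomly paired traces and of their labels.
    """
    rng = rng or np.random.default_rng()
    n = traces.shape[0]
    # One interpolation coefficient per generated example.
    lam = rng.beta(alpha, alpha, size=n)
    # Random pairing: mix each example with a shuffled partner.
    idx = rng.permutation(n)
    mixed_x = lam[:, None] * traces + (1 - lam[:, None]) * traces[idx]
    mixed_y = lam[:, None] * labels + (1 - lam[:, None]) * labels[idx]
    return mixed_x, mixed_y
```

Because the generated traces are convex combinations of real traces, the leakage signal at each sample point is interpolated rather than destroyed, which is consistent with the Correlation Power Analysis observation reported above.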