Deep learning (DL) is a crucial component of modern machine learning, but it also leaves these techniques vulnerable to adversarial examples, which arise across a wide range of applications. Such examples can even be aimed at humans, enabling the creation of fake media such as deepfakes, which are often used to shape public opinion and damage the reputations of public figures. This article explores the concept of adversarial examples, which consist of perturbations added to clean images or videos, and their ability to deceive DL algorithms. The proposed approach achieved an accuracy of 76.2% on the DFDC dataset.
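To make the idea of an adversarial perturbation concrete, the following is a minimal sketch of one standard way such examples are crafted: the Fast Gradient Sign Method (FGSM). This is an illustration of the general technique only, not the specific approach proposed in this article; the function name `fgsm_perturb`, the `epsilon` budget, and the assumption of a PyTorch classifier are all illustrative choices.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Illustrative FGSM sketch (not this article's proposed method).

    image: clean input tensor of shape (1, C, H, W), values in [0, 1]
    label: ground-truth class index tensor of shape (1,)
    epsilon: maximum per-pixel perturbation magnitude
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    loss = F.cross_entropy(logits, label)
    loss.backward()
    # Step each pixel in the direction that increases the loss,
    # then clip back to the valid image range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

The resulting image is visually almost indistinguishable from the clean input, yet the small, gradient-aligned perturbation can be enough to flip the classifier's prediction, which is the vulnerability the article examines.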