Cybersecurity practices require sustained effort to maintain, and one weakness is a lack of awareness of potential attacks, not only in the use of machine learning models but also in their development process. Previous studies have found that preprocessing attacks, such as image scaling attacks, are difficult to detect both for humans (through visual inspection) and for computers (through entropy-based algorithms). However, these studies do not address the real-world performance and detectability of such attacks. The purpose of this work is to analyze the relationship between awareness of image scaling attacks and demographic background and experience. We conduct a survey in which we gather subjects' demographics, assess their experience in cybersecurity, record their responses to a poorly performing convolutional neural network model that has, unknown to them, been hindered by an image scaling attack on the dataset used, and document their reactions after it is revealed that the images used in the broken model have been attacked. We find that the overall detection rate of the attack is low enough for it to be viable in a workplace or academic setting, and that even after the attack is revealed, subjects cannot reliably distinguish benign images from attacked images.
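For readers unfamiliar with the mechanism, the following is a minimal sketch of the general idea behind an image scaling attack, assuming a simple floor-based nearest-neighbor downscaler defined in the sketch itself. It is illustrative only and not the authors' procedure: published attacks instead optimize perturbations against a specific library's resampling implementation, and all names and sizes below are hypothetical.

```python
import numpy as np

def nearest_downscale(img: np.ndarray, target_hw: tuple[int, int]) -> np.ndarray:
    """Floor-based nearest-neighbor downscale of an (H, W, C) image."""
    h, w = img.shape[:2]
    th, tw = target_hw
    rows = (np.arange(th) * h) // th  # source row sampled for each output row
    cols = (np.arange(tw) * w) // tw  # source column sampled for each output column
    return img[np.ix_(rows, cols)]

def craft_scaling_attack(source: np.ndarray, payload: np.ndarray) -> np.ndarray:
    """Overwrite only the pixels the downscaler samples, so the result
    looks like `source` at full resolution but becomes `payload` after
    downscaling to the payload's size."""
    h, w = source.shape[:2]
    th, tw = payload.shape[:2]
    rows = (np.arange(th) * h) // th
    cols = (np.arange(tw) * w) // tw
    attacked = source.copy()
    attacked[np.ix_(rows, cols)] = payload
    return attacked

# Demo with random data: the downscaled attacked image equals the payload,
# while only th*tw of the source's h*w pixels were modified.
source = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
payload = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
attacked = craft_scaling_attack(source, payload)
assert np.array_equal(nearest_downscale(attacked, (64, 64)), payload)
```

Because only a small fraction of pixels is altered, the full-resolution image remains visually indistinguishable from the original, which is what makes such attacks hard to spot during dataset inspection.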