Guaranteeing the security of transactional systems is a crucial priority for all institutions that process transactions, in order to protect their businesses against cyberattacks and fraud attempts. Adversarial attacks are novel techniques that, besides having proven effective at fooling image classification models, can also be applied to tabular data. Adversarial attacks aim to produce adversarial examples, in other words, slightly modified inputs that induce the Artificial Intelligence (AI) system to return incorrect outputs that are advantageous for the attacker. In this paper we illustrate a novel approach to modify and adapt state-of-the-art algorithms to imbalanced tabular data, in the context of fraud detection. Experimental results show that the proposed modifications lead to a perfect attack success rate, yielding adversarial examples that are also less perceptible when analyzed by humans. Moreover, when applied to a real-world production system, the proposed techniques show the potential to pose a serious threat to the robustness of advanced AI-based fraud detection procedures.