In recent years, machine learning has become prevalent in numerous tasks, including algorithmic trading. Stock market traders utilize machine learning models to predict the market's behavior and execute an investment strategy accordingly. However, machine learning models have been shown to be susceptible to input manipulations called adversarial examples. Despite this risk, the trading domain remains largely unexplored in the context of adversarial learning. In this study, we present a realistic scenario in which an attacker influences algorithmic trading systems by using adversarial learning techniques to manipulate the input data stream in real time. The attacker creates a universal perturbation that is agnostic to the target model and time of use and that remains imperceptible when added to the input stream. We evaluate our attack on a real-world market data stream and target three different trading algorithms. We show that our perturbation can fool these trading algorithms on future, unseen data points, in both white-box and black-box settings. Finally, we present various mitigation methods and discuss their limitations, which stem from the characteristics of the algorithmic trading domain. We believe that these findings should serve as an alert to the finance community about the threats in this area and promote further research on the risks associated with using automated learning models in the trading domain.
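To make the attack setting concrete, below is a minimal, hypothetical sketch of crafting a universal adversarial perturbation against a price-direction classifier. This is not the paper's implementation: the model (`DirectionNet`), the window length, the perturbation budget `EPS`, the use of Adam, and the placeholder random data are all assumptions made for illustration; a single bounded perturbation is optimized over many historical windows so that it transfers to unseen future windows.

```python
# Hypothetical sketch of a universal adversarial perturbation attack on a
# toy trading model. DirectionNet, WINDOW, EPS, and the random "market"
# data are illustrative assumptions, not the authors' setup.
import torch
import torch.nn as nn

WINDOW = 32   # assumed length of each sliding input window (time steps)
EPS = 0.01    # assumed perturbation budget (in normalized price units)

class DirectionNet(nn.Module):
    """Toy stand-in for a trading model: predicts up/down from one window."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(WINDOW, 64), nn.ReLU(), nn.Linear(64, 2)
        )

    def forward(self, x):
        return self.net(x)

def craft_universal_perturbation(model, windows, target_class, steps=200):
    """Optimize one perturbation that pushes *every* historical window
    toward target_class (e.g. 'up'), so it can transfer to future data."""
    delta = torch.zeros(1, WINDOW, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    target = torch.full((windows.size(0),), target_class, dtype=torch.long)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(windows + delta)   # the same delta on all windows
        loss = loss_fn(logits, target)
        loss.backward()
        opt.step()
        with torch.no_grad():             # keep the perturbation small
            delta.clamp_(-EPS, EPS)
    return delta.detach()

# Usage: craft delta on historical windows, then measure how often it
# flips the model's prediction on held-out (future) windows.
model = DirectionNet()
hist = torch.randn(256, WINDOW)    # placeholder for real market windows
delta = craft_universal_perturbation(model, hist, target_class=1)
future = torch.randn(64, WINDOW)   # placeholder for unseen future data
fool_rate = (model(future + delta).argmax(1) == 1).float().mean().item()
print(f"fraction predicted 'up' after perturbation: {fool_rate:.2%}")
```

The key design point this sketch captures is universality: a single `delta`, fixed in advance and bounded by a small budget, is reused on every incoming window, which is what allows an attacker to inject it into a live data stream without per-timestep optimization. A black-box variant would craft `delta` on a surrogate model and rely on transferability.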