More and more investors and machine learning models rely on social media (e.g., Twitter and Reddit) to gather real-time information and sentiment to predict stock price movements. Although text-based models are known to be vulnerable to adversarial attacks, whether stock prediction models exhibit similar vulnerabilities remains underexplored. In this paper, we experiment with a variety of adversarial attack configurations to fool three stock prediction victim models. We address the task of adversarial generation by solving combinatorial optimization problems with semantic and budget constraints. Our results show that the proposed attack method can achieve consistent success rates and cause significant monetary loss in a trading simulation by simply concatenating a perturbed but semantically similar tweet.
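To make the attack setting concrete, the sketch below shows a simplified, single-word greedy variant of the perturbation-and-concatenation idea described above; it is not the authors' optimization procedure, which jointly selects replacements under a budget. The names `victim_score`, `similarity`, and `candidates` are hypothetical stand-ins for the victim stock-prediction model, a semantic-similarity checker, and a synonym candidate set.

```python
# Minimal sketch (assumed, not the paper's implementation): greedily replace
# at most one word in a tweet so the victim model's confidence in the true
# movement drops, while a semantic-similarity constraint keeps the perturbed
# tweet close to the original.

def perturb_tweet(tweet, candidates, victim_score, similarity,
                  sim_threshold=0.8):
    """Return a perturbed tweet that lowers victim_score(tweet) the most
    among single-word replacements passing the similarity constraint."""
    words = tweet.split()
    best_tweet, best_score = tweet, victim_score(tweet)
    for idx, word in enumerate(words):
        for repl in candidates.get(word, []):
            perturbed = " ".join(words[:idx] + [repl] + words[idx + 1:])
            if similarity(tweet, perturbed) < sim_threshold:
                continue  # reject semantically dissimilar perturbations
            score = victim_score(perturbed)
            if score < best_score:
                best_tweet, best_score = perturbed, score
    return best_tweet


def attack_input(original_tweets, adversarial_tweet):
    """Concatenation attack: append the perturbed tweet to the tweets the
    victim model reads for a given stock and day."""
    return original_tweets + [adversarial_tweet]
```

In this reading, the "budget" constraint bounds how many tokens may be altered (here fixed at one), and the attacker never modifies genuine tweets; the adversarial tweet is simply added to the model's input stream, mirroring the concatenation setting the abstract describes.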