Machine learning methods have recently shown promise for stock trend forecasting. However, the volatile and dynamic nature of the stock market makes it difficult to apply machine learning techniques directly. Previous methods usually exploit the temporal information in historical stock price patterns to predict future trends, but the multi-scale temporal dependencies of financial data and stable trading opportunities remain difficult to capture. The core difficulty lies in recognizing the patterns of genuine profit signals amid noisy information. In this paper, we propose a framework called Multiscale Temporal Memory Learning and Efficient Debiasing (MTMD). Specifically, exploiting self-similarity, we design a learnable embedding with external attention as a memory block in order to reduce noise and enhance the temporal consistency of the model. The framework not only aggregates comprehensive local information at each timestamp but also concentrates globally important historical patterns across the whole time stream. In addition, we design a graph network based on global and local information to adaptively fuse the heterogeneous multi-scale information. Extensive ablation studies and experiments demonstrate that MTMD outperforms state-of-the-art approaches by a significant margin on benchmark datasets. The source code of our proposed method is available at https://github.com/MingjieWang0606/MDMT-Public.
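To make the memory-block idea concrete, the following is a minimal NumPy sketch of external attention in the general style of Guo et al.: instead of self-attention over the input sequence, each timestamp's features attend to a small *learnable external memory* (here `Mk`, `Mv`, randomly initialized for illustration) shared across all samples, which acts as a denoising bottleneck. All names and dimensions are illustrative assumptions, not the exact MTMD implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def external_attention(x, Mk, Mv):
    """Attend from input features to a shared external memory.

    x:  (T, d) per-timestamp features of one stock sequence
    Mk: (S, d) learnable memory keys   (S small memory slots)
    Mv: (S, d) learnable memory values

    Returns a (T, d) reconstruction of x through the memory,
    which filters out patterns the memory cannot represent.
    """
    attn = softmax(x @ Mk.T, axis=-1)                       # (T, S) slot affinities
    attn = attn / (attn.sum(axis=0, keepdims=True) + 1e-9)  # double normalization
    return attn @ Mv                                        # (T, d) memory read-out

# Toy usage: 16 timestamps, 4 memory slots, 8-dim features.
rng = np.random.default_rng(0)
T, S, d = 16, 4, 8
x = rng.normal(size=(T, d))
Mk = rng.normal(size=(S, d))
Mv = rng.normal(size=(S, d))
out = external_attention(x, Mk, Mv)
assert out.shape == (T, d)
```

Because `Mk` and `Mv` are shared across the whole dataset, the memory tends to store globally recurring historical patterns, while per-timestamp affinities capture local context, matching the global/local decomposition described above.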