Cross-sectional strategies are a classical and popular trading style, with recent high-performing variants incorporating sophisticated neural architectures. While these strategies have been applied successfully to data-rich settings involving mature assets with long histories, deploying them on instruments with limited samples generally produces over-fitted models with degraded performance. In this paper, we introduce Fused Encoder Networks -- a novel hybrid parameter-sharing transfer ranking model. The model fuses information extracted by an encoder-attention module operating on a source dataset with that of a similar but separate module focused on a smaller target dataset of interest. This mitigates the poor generalisability that results from training on scarce target data. Additionally, the self-attention mechanism enables interactions among instruments to be accounted for, not just at the loss level during model training, but also at inference time. Focusing on momentum applied to the top ten cryptocurrencies by market capitalisation as a demonstrative use-case, Fused Encoder Networks outperform the reference benchmarks on most performance measures, delivering a three-fold boost in the Sharpe ratio over classical momentum, as well as an improvement of approximately 50% against the best benchmark model, before transaction costs. They continue to outperform the baselines even after accounting for the high transaction costs associated with trading cryptocurrencies.
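The two-branch fusion described above can be sketched in a few lines. This is a minimal, illustrative NumPy sketch only: the layer sizes, random weights, encoder form, single-head attention, and concatenation-based fusion are all assumptions for exposition, not the paper's exact architecture. It shows the key structural idea: each branch embeds the cross-section of instruments and applies self-attention across the instrument axis, so instruments interact at inference time, and the two branches' representations are fused into one ranking score per instrument.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(H, Wq, Wk, Wv):
    """Single-head self-attention across the instrument axis: rows of H are
    instruments, so each instrument attends to every other instrument."""
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    weights = softmax(Q @ K.T / np.sqrt(K.shape[1]), axis=-1)
    return weights @ V

n_instruments, n_features, d = 10, 8, 16  # hypothetical dimensions

def make_branch():
    """Random stand-in weights; in the actual model the 'source' branch
    would be trained on the data-rich source universe and the 'target'
    branch on the small target dataset."""
    return {
        "We": rng.normal(scale=0.1, size=(n_features, d)),
        "Wq": rng.normal(scale=0.1, size=(d, d)),
        "Wk": rng.normal(scale=0.1, size=(d, d)),
        "Wv": rng.normal(scale=0.1, size=(d, d)),
    }

def branch_forward(p, X):
    H = np.tanh(X @ p["We"])  # encoder: per-instrument embedding
    return self_attention(H, p["Wq"], p["Wk"], p["Wv"])

source_branch, target_branch = make_branch(), make_branch()

# One cross-section of momentum-style features for the ten instruments.
X = rng.normal(size=(n_instruments, n_features))

# Fuse the two branches (here: simple concatenation) and map to one
# ranking score per instrument; these scores would feed a ranking loss
# during training and an argsort at inference.
fused = np.concatenate(
    [branch_forward(source_branch, X), branch_forward(target_branch, X)], axis=1
)
w_out = rng.normal(scale=0.1, size=(2 * d, 1))
scores = (fused @ w_out).ravel()
ranking = np.argsort(-scores)  # e.g. long the top-ranked, short the bottom
```

Because attention rows mix all instruments before the scores are produced, reordering or changing one instrument's features changes the others' scores, which is how cross-instrument interactions enter at inference time rather than only through a pairwise ranking loss.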