We compare two quantum sequence models, QLSTM and QFWP, under an Equal Parameter Count (EPC), adjoint-differentiation setup on daily EUR/USD forecasting, a controlled one-dimensional time-series case study. Across 10 random seeds and batch sizes from 4 to 64, we measure component-wise runtimes (training forward pass, backward pass, full training step, and inference) as well as accuracy (RMSE and directional accuracy). Batched forward passes scale well, with speedups of roughly 2.2–2.4×, but backward passes scale only modestly (about 1.01–1.05× for QLSTM and 1.18–1.22× for QFWP), which caps end-to-end training speedups near 2×. QFWP achieves lower RMSE and higher directional accuracy at every batch size, supported by a Wilcoxon test (p ≤ 0.004) and a large Cliff's delta, while QLSTM reaches the highest throughput at batch size 64, revealing a clear speed–accuracy Pareto frontier. We provide an EPC-aligned, numerically checked benchmarking pipeline and practical guidance on batch-size choices; broader datasets, hardware backends, and noise settings are left for future work.
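As a concrete illustration of the evaluation summarized above, the following minimal Python sketch computes RMSE, directional accuracy, a paired Wilcoxon test across seeds, and Cliff's delta as an effect size. It is not the paper's released pipeline; the array shapes, the per-seed pairing, and the placeholder values are assumptions for illustration only.

```python
# Minimal sketch of the accuracy metrics and paired statistics referenced in the
# abstract: RMSE, directional accuracy, Wilcoxon test across seeds, Cliff's delta.
# The per-seed RMSE values below are synthetic placeholders, not the paper's data.

import numpy as np
from scipy.stats import wilcoxon


def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root mean squared error over a 1-D forecast series."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))


def directional_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Fraction of steps where the predicted move has the same sign as the actual move."""
    true_dir = np.sign(np.diff(y_true))
    pred_dir = np.sign(np.diff(y_pred))
    return float(np.mean(true_dir == pred_dir))


def cliffs_delta(x: np.ndarray, y: np.ndarray) -> float:
    """Cliff's delta in [-1, 1]: P(x > y) - P(x < y) over all pairs."""
    diffs = x[:, None] - y[None, :]
    return float((np.sum(diffs > 0) - np.sum(diffs < 0)) / diffs.size)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical per-seed test RMSEs for the two models, paired by seed (10 seeds);
    # in the study these would come from the trained QLSTM and QFWP runs.
    rmse_qlstm = rng.normal(0.0060, 0.0003, size=10)
    rmse_qfwp = rng.normal(0.0052, 0.0003, size=10)

    stat, p_value = wilcoxon(rmse_qlstm, rmse_qfwp)  # paired, two-sided by default
    delta = cliffs_delta(rmse_qlstm, rmse_qfwp)
    print(f"Wilcoxon statistic={stat:.3f}, p={p_value:.4f}, Cliff's delta={delta:.2f}")
```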