We consider the learning of multi-agent Hawkes processes, a model containing multiple Hawkes processes that share endogenous impact functions but have different exogenous intensities. Within the framework of stochastic maximum likelihood estimation, we explore the associated risk bound. Further, we consider the superposition of Hawkes processes within the model and demonstrate that, under certain conditions, this operation tightens the risk bound. Accordingly, we propose a stochastic optimization algorithm assisted by a diversity-driven superposition strategy, which achieves better learning results and improved convergence properties. The effectiveness of the proposed method is verified on synthetic data, and its potential to solve the cold-start problem in sequential recommendation systems is demonstrated on real-world data.
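To fix notation (a minimal sketch; the symbols $\lambda_m$, $\mu_m$, and $\phi$ are assumed here rather than taken from the paper body), agent $m$'s event sequence is modeled by a Hawkes process whose intensity combines an agent-specific exogenous rate with a shared endogenous impact function:
\[
\lambda_m(t) \;=\; \mu_m \;+\; \sum_{t_i < t} \phi(t - t_i), \qquad m = 1, \dots, M.
\]
Because $\phi$ is shared across agents, the superposition of independent processes of this form is again a Hawkes process with the same impact function and exogenous rate $\sum_m \mu_m$, which is the operation whose effect on the risk bound is analyzed.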