Point processes are a useful mathematical tool for describing events over time, and many recent approaches have been proposed for representing and learning them. One notable open question is how to precisely characterise the flexibility of point process models, and whether there exists a general model that can represent all point processes. Our work addresses this question. Focusing on the widely used intensity function representation of point processes, we prove that a class of learnable functions can universally approximate any valid intensity function. The proof connects the well-known Stone-Weierstrass Theorem for function approximation, the uniform density of non-negative continuous functions via a transfer function, the formulation of the parameters of piecewise-continuous functions as a dynamical system, and a recurrent neural network implementation for capturing the dynamics. Using these insights, we design and implement UNIPoint, a novel neural point process model that uses recurrent neural networks to parameterise sums of basis functions at each event. Evaluations on synthetic and real-world datasets show that this simpler representation performs better than Hawkes process variants and more complex neural network-based approaches. We expect this result will provide a practical basis for selecting and tuning models, as well as furthering theoretical work on representational complexity and learnability.
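To make the architecture concrete, the following is a minimal sketch of the kind of intensity parameterisation the abstract describes: a sum of basis functions of the elapsed time, passed through a non-negative transfer function (here softplus). The exponential bases, the parameter values, and all names are illustrative assumptions; in the actual model these parameters would be produced by a recurrent neural network's hidden state after each event.

```python
import numpy as np

def softplus(x):
    # Non-negative transfer function; guarantees a valid (positive) intensity
    return np.log1p(np.exp(x))

def intensity(t, t_last, alphas, betas):
    """Conditional intensity between events as a softplus of a sum of
    K basis functions of the time since the last event. Exponential
    bases are one illustrative choice among several."""
    tau = t - t_last                      # elapsed time since last event
    basis = alphas * np.exp(betas * tau)  # K exponential basis values
    return softplus(basis.sum())

# Hypothetical parameters standing in for an RNN's per-event output
alphas = np.array([0.5, -0.2, 1.0])
betas = np.array([-1.0, -0.5, -2.0])
lam = intensity(t=1.3, t_last=1.0, alphas=alphas, betas=betas)
```

Because the softplus output is strictly positive, the resulting function is always a valid intensity regardless of the (possibly negative) basis weights the network emits.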