Forecasting tasks surrounding the dynamics of low-level human behavior are of significance to multiple research domains. In such settings, methods for explaining specific forecasts can enable domain experts to gain insights into the predictive relationships between behaviors. In this work, we introduce and address the following question: given a probabilistic forecasting model, how can we identify observed windows that the model considers salient when making its forecasts? We build upon a general definition of information-theoretic saliency grounded in human perception and extend it to forecasting settings by leveraging a crucial attribute of the domain: a single observation can result in multiple valid futures. We propose to express the saliency of an observed window in terms of the differential entropy of the resulting predicted future distribution. In contrast to existing methods that either require explicit training of the saliency mechanism or access to the internal states of the forecasting model, we obtain a closed-form solution for the saliency map for commonly used density functions in probabilistic forecasting. We empirically demonstrate how our framework can recover salient observed windows from head pose features for the sample task of speaking-turn forecasting using a synthesized conversation dataset.
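To make the core idea concrete, the sketch below computes the closed-form differential entropy of a multivariate Gaussian predictive distribution, one of the "commonly used density functions" the abstract refers to, and converts per-window entropies into a saliency map. This is a minimal illustration under our own assumptions, not the paper's implementation: the function names are illustrative, and we assume here (as a labeled simplification) that windows whose forecasts have lower entropy (more confident futures) receive higher saliency.

```python
import numpy as np

def gaussian_differential_entropy(cov):
    """Closed-form differential entropy of a k-dimensional Gaussian N(mu, cov):
    h = 0.5 * log((2*pi*e)^k * det(cov)); note it is independent of the mean."""
    cov = np.atleast_2d(cov)
    k = cov.shape[0]
    sign, logdet = np.linalg.slogdet(cov)  # numerically stable log-determinant
    assert sign > 0, "covariance must be positive definite"
    return 0.5 * (k * np.log(2 * np.pi * np.e) + logdet)

def saliency_map(entropies):
    """Illustrative saliency scores from per-window predictive entropies.
    Assumption (ours, for the sketch): lower entropy => higher saliency.
    Scores are min-max normalized to [0, 1] for visualization."""
    e = np.asarray(entropies, dtype=float)
    s = e.max() - e                      # invert: confident forecast -> salient
    rng = s.max() - s.min()
    return s / rng if rng > 0 else np.zeros_like(s)

# Example: three observed windows, each inducing a 2-D Gaussian forecast.
# The middle window yields the tightest (lowest-entropy) future distribution.
covs = [np.eye(2), 0.25 * np.eye(2), 4.0 * np.eye(2)]
ents = [gaussian_differential_entropy(c) for c in covs]
scores = saliency_map(ents)
```

Because the entropy has a closed form for such densities, the saliency map requires only the model's predicted distribution parameters, with no extra training of a saliency mechanism and no access to the model's internal states.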