This paper addresses learning end-to-end models for time series data that include a temporal alignment step via dynamic time warping (DTW). Existing approaches to differentiable DTW either differentiate through a fixed warping path or apply a differentiable relaxation to the min operator in the recursion used to solve the DTW problem. We instead propose a DTW layer based on bi-level optimisation and deep declarative networks, which we name DecDTW. By formulating DTW as a continuous, inequality-constrained optimisation problem, gradients of the optimal alignment with respect to the underlying time series can be computed using implicit differentiation. An interesting byproduct of this formulation is that DecDTW outputs the exact optimal warping path between two time series, as opposed to the soft approximation recoverable from Soft-DTW. We show that this property is particularly useful for applications where downstream loss functions are defined on the optimal alignment path itself; this naturally occurs, for instance, when learning to improve the accuracy of predicted alignments against ground-truth alignments. We evaluate DecDTW on two such applications, namely audio-to-score alignment in music information retrieval and visual place recognition in robotics, demonstrating state-of-the-art results in both.
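To make the bi-level structure described above concrete, the following is a minimal sketch in generic notation; the symbols $f$, $h$, $x_\theta$, $y$ and $\mathcal{L}$ are illustrative placeholders and need not match the paper's own notation. The lower-level problem returns the optimal warping path for a given pair of feature time series, and the upper-level loss is defined directly on that path.

% Lower level: DTW cast as a continuous, inequality-constrained program,
% with constraints h encoding feasibility of the warp (e.g. monotonicity
% and boundary conditions). Upper level: a loss on the alignment itself,
% where x_theta denotes features produced by a network with parameters
% theta and y^gt is a ground-truth alignment.
\begin{align*}
  \text{(lower level)}\quad & y^{\star}(x) \;=\; \operatorname*{arg\,min}_{y}\; f(x, y)
      \quad \text{s.t.}\quad h(y) \le 0, \\
  \text{(upper level)}\quad & \min_{\theta}\; \mathcal{L}\!\left(y^{\star}(x_{\theta}),\, y^{\mathrm{gt}}\right).
\end{align*}

Gradients $\mathrm{D}\,y^{\star}(x)$ are obtained by implicitly differentiating the optimality (KKT) conditions of the lower-level problem, rather than by unrolling the DTW recursion or relaxing its min operator; because $y^{\star}$ is the exact optimal path, the upper-level loss can penalise the discrepancy between predicted and ground-truth alignments directly.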