Neural compression offers a domain-agnostic approach to creating codecs for lossy or lossless compression via deep generative models. For sequence compression, however, most deep sequence models have costs that scale with the sequence length rather than the sequence complexity. In this work, we instead treat data sequences as observations from an underlying continuous-time process and learn how to efficiently discretize while retaining information about the full sequence. By decoupling sequential information from its temporal discretization, our approach enables greater compression rates at lower computational cost. Moreover, the continuous-time formulation naturally allows decoding at different time intervals. We empirically verify our approach on multiple domains involving compression of video and motion capture sequences, showing that it can automatically achieve reductions in bit rate by learning how to discretize.
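To illustrate the decode-at-arbitrary-times property, here is a minimal sketch, not the paper's actual method: it assumes latent states stored at a sparse set of learned keypoint times, with linear interpolation between them and a small MLP decoder conditioned on the query time. All names (`ContinuousDecoder`, `z_keys`, `t_keys`) are hypothetical illustrations.

```python
# Hypothetical sketch of continuous-time decoding; linear interpolation
# between stored latents is an assumption, not the paper's design.
import torch
import torch.nn as nn

class ContinuousDecoder(nn.Module):
    """Decodes a frame at any query time t by interpolating a sparse
    set of latent states stored at learned keypoint times."""
    def __init__(self, latent_dim: int, frame_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 1, 256),  # +1 input for the query time t
            nn.ReLU(),
            nn.Linear(256, frame_dim),
        )

    def forward(self, z_keys, t_keys, t_query):
        # z_keys: (K, latent_dim) latents at K discretization points
        # t_keys: (K,) monotonically increasing keypoint times
        # t_query: 0-dim tensor, the time at which to reconstruct a frame
        idx = int(torch.searchsorted(t_keys, t_query.reshape(1)))
        idx = max(1, min(idx, len(t_keys) - 1))   # bracket t_query
        t0, t1 = t_keys[idx - 1], t_keys[idx]
        w = (t_query - t0) / (t1 - t0)            # linear weight in [0, 1]
        z = (1 - w) * z_keys[idx - 1] + w * z_keys[idx]
        return self.net(torch.cat([z, t_query.reshape(1)]))

decoder = ContinuousDecoder(latent_dim=32, frame_dim=64)
z_keys = torch.randn(5, 32)                        # 5 stored latent states
t_keys = torch.tensor([0.0, 0.2, 0.5, 0.7, 1.0])   # learned keypoint times
frame = decoder(z_keys, t_keys, torch.tensor(0.33))  # decode at any time
```

Because the decoder is conditioned on a continuous time rather than a frame index, the same stored latents can be queried at any temporal resolution, which is the decoupling of sequential information from its discretization that the abstract describes.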