Video matting aims to predict the alpha matte for each frame of a given input video sequence. For the past few years, solutions to video matting have been dominated by deep convolutional neural networks (CNNs), which have become the de-facto standard for both academia and industry. However, their CNN-based architectures carry a built-in inductive bias toward locality and fail to capture the global characteristics of an image. They also forgo long-range temporal modeling because of the computational cost of processing feature maps from multiple frames. In this paper, we propose VMFormer: a transformer-based end-to-end method for video matting. Given an input video sequence, it predicts the alpha matte of each frame from learnable queries. Specifically, it leverages self-attention layers to build global integration of the feature sequence, with short-range temporal modeling across successive frames. We further apply the queries to learn global representations through cross-attention in the transformer decoder, with long-range temporal modeling over all queries. In the prediction stage, both the queries and the corresponding feature maps are used to make the final prediction of the alpha mattes. Experiments show that VMFormer outperforms previous CNN-based video matting methods on composited benchmarks. To the best of our knowledge, it is the first end-to-end video matting solution built upon a full vision transformer that makes predictions from learnable queries. The project is open-sourced at https://chrisjuniorli.github.io/project/VMFormer/
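To make the described pipeline concrete, the following is a minimal, hypothetical PyTorch sketch of the high-level data flow only: a transformer encoder globally integrates per-frame features, learnable queries attend to them through a transformer decoder, and each frame's matte is predicted by correlating its query with its feature map. The class name `VMFormerSketch`, the layer counts, and the tensor shapes are illustrative assumptions, not the authors' implementation (which additionally uses a CNN backbone and other components not shown here).

```python
import torch
import torch.nn as nn

class VMFormerSketch(nn.Module):
    """Hypothetical sketch: encoder self-attention over frame features,
    decoder cross-attention from learnable queries, and query-feature
    correlation to predict per-frame alpha mattes."""

    def __init__(self, dim=256, num_frames=5, nhead=8):
        super().__init__()
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=nhead, batch_first=True),
            num_layers=3)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model=dim, nhead=nhead, batch_first=True),
            num_layers=3)
        # one learnable query per frame; the decoder's self-attention over all
        # queries plays the role of long-range temporal modeling
        self.queries = nn.Embedding(num_frames, dim)

    def forward(self, feats):
        # feats: (B, T, C, H, W) feature maps from a per-frame backbone (assumed given)
        B, T, C, H, W = feats.shape
        tokens = feats.flatten(3).permute(0, 1, 3, 2).reshape(B, T * H * W, C)
        memory = self.encoder(tokens)                     # global integration of the feature sequence
        q = self.queries.weight.unsqueeze(0).expand(B, -1, -1)
        q = self.decoder(q, memory)                       # queries attend to all frame features
        feat_maps = memory.reshape(B, T, H * W, C)
        # matte logits: inner product between each frame's query and its feature map
        logits = torch.einsum('btc,btnc->btn', q, feat_maps).reshape(B, T, 1, H, W)
        return torch.sigmoid(logits)                      # alpha mattes in [0, 1]

alphas = VMFormerSketch()(torch.randn(2, 5, 256, 16, 16))
print(alphas.shape)  # torch.Size([2, 5, 1, 16, 16])
```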