Video stabilization plays a central role in improving video quality. However, despite the substantial progress made by existing methods, they have mainly been tested under standard weather and lighting conditions and may perform poorly under adverse conditions. In this paper, we propose a synthetic-aware, adverse-weather-robust video stabilization algorithm that requires no real data and can be trained solely on synthetic data. We also present Silver, a novel rendering engine that generates the required training data with an automatic ground-truth extraction procedure. Our approach uses this specially generated synthetic data to train an affine transformation matrix estimator, avoiding the feature extraction issues faced by current methods. Additionally, since no video stabilization datasets under adverse conditions are available, we propose the novel VSAC105Real dataset for evaluation. We compare our method to five state-of-the-art video stabilization algorithms on two benchmarks. Our results show that current approaches perform poorly in at least one weather condition and that, even when trained on a small, purely synthetic dataset, our method achieves the best performance in terms of average stability score, distortion score, success rate, and average cropping ratio across all weather conditions. Hence, our video stabilization model generalizes well to real-world videos and does not require large-scale synthetic training data to converge.