Dynamic scene graph generation (SGG) from videos is complex and challenging due to the inherent dynamics of a scene, temporal fluctuations in model predictions, and the long-tailed distribution of visual relationships, in addition to the challenges already present in image-based SGG. Existing methods for dynamic SGG have primarily focused on capturing spatio-temporal context using complex architectures without addressing the challenges mentioned above, especially the long-tailed distribution of relationships. This often leads to biased scene graphs. To address these challenges, we introduce a new framework called TEMPURA: TEmporal consistency and Memory Prototype guided UnceRtainty Attenuation for unbiased dynamic SGG. TEMPURA employs object-level temporal consistency via transformer-based sequence modeling, learns to synthesize unbiased relationship representations using memory-guided training, and attenuates the predictive uncertainty of visual relations using a Gaussian Mixture Model (GMM). Extensive experiments demonstrate that our method achieves significant performance gains over existing methods (up to 10% in some cases), highlighting its superiority in generating more unbiased scene graphs.
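To make the GMM-based uncertainty attenuation concrete, one can picture it as a mixture-density classification head over relationship classes, where high predicted variance softens the contribution of uncertain predictions to the loss. The following is a minimal, hypothetical sketch (not the authors' released code), assuming a Kendall-and-Gal-style Monte Carlo corruption of class logits; names such as `GMMRelationHead`, `feat_dim`, and `num_components` are illustrative assumptions.

```python
# Hypothetical sketch of a GMM relationship head with uncertainty attenuation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GMMRelationHead(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int, num_components: int = 4):
        super().__init__()
        self.K = num_components
        self.C = num_classes
        # Per-component class logits (means), log-variances, and mixture weights.
        self.mu = nn.Linear(feat_dim, num_components * num_classes)
        self.log_var = nn.Linear(feat_dim, num_components * num_classes)
        self.pi = nn.Linear(feat_dim, num_components)

    def forward(self, x: torch.Tensor, num_samples: int = 10):
        B = x.size(0)
        mu = self.mu(x).view(B, self.K, self.C)
        log_var = self.log_var(x).view(B, self.K, self.C)
        pi = F.softmax(self.pi(x), dim=-1)                   # (B, K)
        # Monte Carlo corruption of the logits: components with high predicted
        # variance yield flatter softmax outputs, attenuating their influence.
        eps = torch.randn(num_samples, B, self.K, self.C, device=x.device)
        logits = mu.unsqueeze(0) + eps * (0.5 * log_var).exp().unsqueeze(0)
        probs = F.softmax(logits, dim=-1).mean(dim=0)        # (B, K, C)
        probs = (pi.unsqueeze(-1) * probs).sum(dim=1)        # mixture over K components
        # Aleatoric uncertainty per class: mixture-weighted predicted variance.
        uncertainty = (pi.unsqueeze(-1) * log_var.exp()).sum(dim=1)
        return probs, uncertainty

# Usage sketch: loss = F.nll_loss(torch.log(probs + 1e-8), labels)
# High-variance (uncertain) relations contribute less sharply peaked
# probabilities, which is the attenuation effect described in the abstract.
```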