Video captioning aims to convey dynamic scenes from videos using natural language, facilitating the understanding of spatiotemporal information within our environment. Although there have been recent advances, generating detailed and enriched video descriptions continues to be a substantial challenge. In this work, we introduce Video ChatCaptioner, an innovative approach for creating more comprehensive spatiotemporal video descriptions. Our method employs a ChatGPT model as a controller, specifically designed to select frames for posing video content-driven questions. Subsequently, a robust algorithm is utilized to answer these visual questions. This question-answer framework effectively uncovers intricate video details and shows promise as a method for enriching video descriptions. Following multiple conversational rounds, ChatGPT can summarize the enriched video content based on the previous conversation. We qualitatively demonstrate that Video ChatCaptioner can generate captions containing more visual details about the videos. The code is publicly available at https://github.com/Vision-CAIR/ChatCaptioner.
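To make the question-answer framework concrete, the sketch below outlines its control flow: a controller repeatedly selects a frame and asks a question, a visual question-answering model answers it, and the dialogue is finally summarized into a caption. The functions ask_question, answer_question, and summarize are hypothetical stubs standing in for the ChatGPT prompts and the VQA model; this is a minimal illustration of the loop, not the actual implementation in the repository.

```python
from typing import List, Tuple

# Hypothetical stubs: in the paper, the asker/summarizer role is played by
# ChatGPT and the answerer by a visual question-answering model. They are
# stubbed here only so the control flow is runnable.
def ask_question(chat_log: List[Tuple[str, str]], num_frames: int) -> Tuple[int, str]:
    frame_id = len(chat_log) % num_frames              # naive frame-selection stub
    return frame_id, f"What is happening in frame {frame_id}?"

def answer_question(frame: object, question: str) -> str:
    return "a person is riding a bicycle"              # VQA-model stub

def summarize(chat_log: List[Tuple[str, str]]) -> str:
    return " ".join(answer for _, answer in chat_log)  # summarization stub

def video_chatcaptioner(frames: List[object], num_rounds: int = 5) -> str:
    """Iterative question-answer rounds over sampled frames, then a summary caption."""
    chat_log: List[Tuple[str, str]] = []
    for _ in range(num_rounds):
        # The controller picks a frame and poses a content-driven question,
        # conditioned on the conversation so far.
        frame_id, question = ask_question(chat_log, num_frames=len(frames))
        # The visual question-answering model answers for the chosen frame.
        answer = answer_question(frames[frame_id], question)
        chat_log.append((question, answer))
    # After the conversational rounds, the dialogue is condensed into one caption.
    return summarize(chat_log)

if __name__ == "__main__":
    print(video_chatcaptioner(frames=[None] * 8))
```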