Real-world image captions often lack contextual depth, omitting crucial details such as event background, temporal cues, outcomes, and named entities that are not visually discernible. This gap limits the usefulness of image understanding in domains such as journalism, education, and digital archives, where richer, more informative descriptions are essential. To address this, we propose a multimodal pipeline that augments visual input with external textual knowledge. Our system retrieves semantically similar images using BEiT-3 (Flickr30k-384 and COCO-384) and SigLIP (So-384), reranks them with ORB and SIFT for geometric alignment, and extracts contextual information from related articles via semantic search. A Qwen3 model fine-tuned with QLoRA then integrates this context with base captions generated by InstructBLIP (Vicuna-7B) to produce event-enriched, context-aware descriptions. Evaluated on the OpenEvents v1 dataset, our approach generates significantly more informative captions than traditional methods, showing strong potential for real-world applications that require deeper visual-textual understanding.
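To make the geometric-alignment reranking step concrete, the sketch below scores each retrieved candidate by the number of RANSAC-consistent ORB matches against the query image and reorders the candidates accordingly. It is a minimal illustration assuming OpenCV, not the system's exact implementation: the function names and thresholds are hypothetical, and the SIFT branch, the upstream BEiT-3/SigLIP retrieval, and the downstream article search and caption fusion are omitted.

```python
import cv2
import numpy as np


def orb_geometric_score(query_path: str, candidate_path: str) -> int:
    """Score a candidate by the number of RANSAC-consistent ORB matches
    with the query image (a rough proxy for geometric alignment)."""
    orb = cv2.ORB_create(nfeatures=2000)
    q_img = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    c_img = cv2.imread(candidate_path, cv2.IMREAD_GRAYSCALE)

    q_kp, q_desc = orb.detectAndCompute(q_img, None)
    c_kp, c_desc = orb.detectAndCompute(c_img, None)
    if q_desc is None or c_desc is None:
        return 0

    # Brute-force Hamming matching with Lowe's ratio test
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    knn = matcher.knnMatch(q_desc, c_desc, k=2)
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    if len(good) < 4:  # at least 4 correspondences are needed for a homography
        return 0

    # Fit a homography with RANSAC and count the inlier matches
    src = np.float32([q_kp[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([c_kp[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return int(mask.sum()) if mask is not None else 0


def rerank(query_path: str, candidates: list[str], top_k: int = 5) -> list[str]:
    """Reorder retrieved candidates by their geometric-consistency score."""
    ranked = sorted(candidates,
                    key=lambda p: orb_geometric_score(query_path, p),
                    reverse=True)
    return ranked[:top_k]
```

In this sketch, candidates that survive the inlier count are the ones most likely to depict the same scene as the query, which is what makes the subsequent article retrieval and context extraction meaningful.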