In this survey, we review methods that retrieve multimodal knowledge to assist and augment generative models. These works focus on retrieving grounding contexts from external sources, including images, code, tables, graphs, and audio. As multimodal learning and generative AI have become increasingly impactful, such retrieval augmentation offers a promising solution to important concerns such as factuality, reasoning, interpretability, and robustness. We provide an in-depth review of retrieval-augmented generation across different modalities and discuss potential future directions. As this is an emerging field, we continue to add new papers and methods.
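The retrieval-augmentation pattern surveyed here — retrieve grounding contexts from an external source, then condition generation on them — can be sketched minimally. This is an illustrative toy, not any surveyed system: the corpus, the bag-of-words similarity scorer, and the prompt format are hypothetical stand-ins for the learned dense retrievers and multimodal generators the actual works use.

```python
# Toy sketch of retrieval-augmented generation (hypothetical corpus and
# scoring; real systems use learned retrievers and an actual generator).
from collections import Counter
import math


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k corpus entries most similar to the query."""
    q = Counter(query.lower().split())
    return sorted(
        corpus,
        key=lambda d: cosine(q, Counter(d.lower().split())),
        reverse=True,
    )[:k]


def augment_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved grounding contexts to the query before generation."""
    contexts = retrieve(query, corpus)
    return "\n".join(f"[context] {c}" for c in contexts) + f"\n[query] {query}"


corpus = [
    "The Eiffel Tower is located in Paris.",
    "Python is a programming language.",
    "Paris is the capital of France.",
]
print(augment_prompt("Where is the Eiffel Tower?", corpus))
```

In a real retrieval-augmented pipeline, the augmented prompt (or retrieved image/table/audio features) is passed to a generative model, so the output is grounded in the retrieved evidence rather than relying solely on parametric knowledge.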