Personalized Visual Language Models (VLMs) are attracting increasing attention for their ability to support interactions grounded in user-specific concepts (e.g., identifying a user's bike). Existing methods typically learn a separate embedding for each new concept, which precludes real-time adaptation at test time. This limitation becomes particularly pronounced in large-scale scenarios, where efficient retrieval of concept embeddings is infeasible. To bridge this gap, we propose Online-PVLM, a framework for online concept learning that leverages hyperbolic representations. Our approach introduces a training-free paradigm for generating concept embeddings at test time, making personalized VLMs both scalable and efficient. In addition, we develop OP-Eval, a comprehensive, large-scale benchmark comprising 1,292 concepts and over 30K high-quality instances with diverse question types, designed to rigorously assess online concept learning in realistic scenarios. Extensive experiments demonstrate the state-of-the-art performance of our proposed framework. Our source code and dataset will be made publicly available.