Much of the research on learning symbolic models of AI agents focuses on agents with stationary models. This assumption fails to hold in settings where the agent's capabilities may change as a result of learning, adaptation, or other post-deployment modifications. Efficient assessment of agents in such settings is critical for learning the true capabilities of an AI system and for ensuring its safe usage. In this work, we propose a novel approach to "differentially" assess black-box AI agents that have drifted from their previously known models. As a starting point, we consider the fully observable and deterministic setting. We leverage sparse observations of the drifted agent's current behavior and knowledge of its initial model to generate an active querying policy that selectively queries the agent and computes an updated model of its functionality. Empirical evaluation shows that our approach is much more efficient than re-learning the agent model from scratch. We also show that the cost of differential assessment using our method is proportional to the amount of drift in the agent's functionality.
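To make the high-level description above concrete, the sketch below shows one way a differential-assessment loop could look in Python for the fully observable, deterministic setting, under strong simplifying assumptions. It is not the paper's actual algorithm: the STRIPS-like action representation, the drift-flagging test, the query policy, and the `induce_action` helper are all hypothetical names invented for illustration, and the black-box agent is stubbed as a callable `query_agent(state, action_name)` that executes an action and reports the resulting state.

```python
# Illustrative sketch only; all identifiers here are hypothetical and the
# real approach's query-generation policy is not reproduced.
from dataclasses import dataclass

@dataclass
class Action:
    """A deterministic, fully observable action model: preconditions that
    must hold in a state, plus add/delete effects applied on execution."""
    name: str
    precond: frozenset = frozenset()   # fluents required to be true
    add: frozenset = frozenset()       # fluents made true by the action
    delete: frozenset = frozenset()    # fluents made false by the action

def apply(action, state):
    """Successor state predicted by the known model (None if inapplicable)."""
    if not action.precond <= state:
        return None
    return (state - action.delete) | action.add

def flag_drifted_actions(model, observations):
    """Compare sparse (state, action_name, next_state) traces of the drifted
    agent against the initial model; keep only actions whose observed
    behavior contradicts the model's prediction."""
    drifted = set()
    for state, name, next_state in observations:
        if apply(model[name], state) != next_state:
            drifted.add(name)
    return drifted

def induce_action(name, executions):
    """Naive placeholder for query-based model learning: preconditions are
    fluents true in every state where the action succeeded; effects are the
    observed state differences."""
    precond = frozenset.intersection(*(frozenset(s) for s, _ in executions))
    add = frozenset().union(*(ns - s for s, ns in executions))
    delete = frozenset().union(*(s - ns for s, ns in executions))
    return Action(name, precond, add, delete)

def reassess(model, observations, query_agent, candidate_states):
    """Differential assessment loop: actively query the black-box agent only
    about actions flagged as drifted and rebuild their models from the
    responses, instead of re-learning every action from scratch."""
    updated = dict(model)
    for name in flag_drifted_actions(model, observations):
        executions = []
        for state in candidate_states:             # active queries
            next_state = query_agent(state, name)  # agent executes; we observe
            if next_state is not None:
                executions.append((state, next_state))
        if executions:                             # only rebuild if it ever ran
            updated[name] = induce_action(name, executions)
    return updated
```

Restricting queries to the flagged actions is what would make the assessment cost scale with the amount of drift, mirroring the proportionality claim in the abstract; an undrifted action is never queried, so its model carries over unchanged.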