In this concept paper, we discuss the intricacies of specifying and verifying the quality of continuous and lifelong learning artificial intelligence systems that interact with and influence their environment, causing so-called concept drift. We identify the problem of implicit feedback loops and demonstrate, on an exemplary housing-price prediction system, how they interfere with user behavior. Based on a preliminary model, we highlight the conditions under which such feedback loops arise and discuss possible solution approaches.
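To make the implicit feedback loop concrete, the following minimal sketch simulates one hypothetical mechanism (all names and parameters are illustrative assumptions, not the paper's model): a linear model is trained on housing prices, its predictions are published, sellers anchor their asking prices on the published prediction plus a markup, and the model is then retrained on these self-influenced prices. The model's systematic error is baked into its own future training data and persists even though the underlying "true" pricing process never changes.

```python
import random

random.seed(0)

def true_value(f):
    # hypothetical ground-truth relation between a feature and the fair price
    return 100.0 + 10.0 * f

def fit(samples):
    # ordinary least squares for a single feature -> (slope, intercept)
    n = len(samples)
    mx = sum(f for f, _ in samples) / n
    my = sum(p for _, p in samples) / n
    cov = sum((f - mx) * (p - my) for f, p in samples)
    var = sum((f - mx) ** 2 for f, _ in samples)
    slope = cov / var
    return slope, my - slope * mx

ANCHORING = 0.5  # assumed weight sellers place on the published prediction
MARKUP = 5.0     # assumed amount sellers ask on top of the prediction

# round 0: the model is trained on prices produced by the true process
samples = [(f, true_value(f) + random.gauss(0, 1.0))
           for f in (random.uniform(0, 10) for _ in range(200))]
slope, intercept = fit(samples)
intercepts = [intercept]

for _ in range(10):
    # sellers blend the fair value with the marked-up model prediction,
    # so the next round's training data is influenced by the model itself
    samples = []
    for _ in range(200):
        f = random.uniform(0, 10)
        prediction = slope * f + intercept
        price = ((1 - ANCHORING) * true_value(f)
                 + ANCHORING * (prediction + MARKUP)
                 + random.gauss(0, 1.0))
        samples.append((f, price))
    slope, intercept = fit(samples)  # retrain on self-influenced data
    intercepts.append(intercept)

# intercepts drifts from ~100 toward ~105: the model's over-prediction
# becomes a stable distortion it can no longer unlearn from its own data
```

With these assumed parameters the loop converges to a persistent offset (intercept fixed point at 100 + ANCHORING * MARKUP / (1 - ANCHORING) = 105); stronger anchoring moves the fixed point further from the true process, which is one way such a loop can sustain concept drift.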