Collaborative AI systems aim to work together with humans in a shared space to achieve a common goal. This setting entails potentially hazardous circumstances, since physical contact could harm human beings. Thus, building such systems with strong assurances of compliance with the requirements of domain-specific standards and regulations is of utmost importance. The challenges associated with achieving this goal become even more severe when such systems rely on machine learning components rather than on top-down, rule-based AI. In this paper, we introduce a risk modeling approach tailored to Collaborative AI systems. The risk model includes goals, risk events, and domain-specific indicators that potentially expose humans to hazards. The risk model is then leveraged to drive assurance methods, which in turn feed the risk model with insights extracted from run-time evidence. Our envisioned approach is described by means of a running example in the domain of Industry 4.0, where a robotic arm endowed with a visual perception component, implemented with machine learning, collaborates with a human operator on a production-relevant task.