Rapid advances in robotics and machine learning are enabling robots to move beyond controlled industrial spaces and perform novel everyday tasks in domestic and urban environments. To make the presence of robots both safe and comfortable for humans, and to facilitate their acceptance in public environments, robots are often equipped with social abilities for navigation and interaction. Socially compliant robot navigation is increasingly learned from human observations or demonstrations. We argue that these techniques, which typically aim to mimic human behavior, do not guarantee fair behavior. As a consequence, social navigation models can replicate, promote, and amplify societal unfairness such as discrimination and segregation. In this work, we investigate a framework for diminishing bias in social robot navigation models so that robots can both plan and adapt their paths based on physical as well as social demands. Our proposed framework consists of two components: \textit{learning}, which incorporates social context into the learning process to account for safety and comfort, and \textit{relearning}, which detects and corrects potentially harmful outcomes before their onset. We provide both technological and societal analyses using three diverse case studies covering different social scenarios of interaction. Moreover, we discuss the ethical implications of deploying robots in social environments and propose potential solutions. Through this study, we highlight the importance of, and advocate for, fairness in human-robot interactions in order to promote more equitable social relationships, roles, and dynamics, and consequently to positively influence our society.