The integration of Large Language Models (LLMs) into social robotics presents a unique set of ethical challenges and social impacts. This research sets out to identify the ethical considerations that arise in the design and development of these two technologies in combination. Using LLMs for social robotics may offer benefits, such as enabling natural-language, open-domain dialogue. However, the intersection of these two technologies also gives rise to ethical concerns related to misinformation, non-verbal cues, emotional disruption, and bias. The robot's physical social embodiment adds complexity, as ethical hazards associated with LLM-based Social AI, such as hallucinations and misinformation, can be exacerbated by the effects of physical embodiment on social perception and communication. To address these challenges, this study employs an empirical, design-justice-based methodology, focusing on identifying socio-technical ethical considerations through a qualitative co-design and interaction study. The purpose of the study is to identify ethical considerations relevant to the process of co-designing, and interacting with, a humanoid social robot serving as the interface to an LLM, and to evaluate how a design justice methodology can be applied to the design of LLM-based social robots. The findings reveal a mapping of ethical considerations across four conceptual dimensions: interaction, co-design, terms of service, and relationship, and show how a design justice approach can be used empirically at the intersection of LLMs and social robotics.