Artificial Intelligence (AI) is becoming more pervasive through all levels of society, trying to help us be more productive. Research like Amershi et al.'s 18 guidelines for human-AI interaction aims to provide high-level design advice, yet little is known about how people react to applications or violations of the guidelines. This leaves a gap for designers of human-AI systems applying such guidelines, where AI-powered systems might work better for certain sets of users than for others, inadvertently introducing inclusiveness issues. To address this, we performed a secondary analysis of 1,016 participants across 16 experiments, disaggregating their data by their 5 cognitive problem-solving styles from the Gender-inclusiveness Magnifier (GenderMag) method, and illustrate different situations that participants found themselves in. We found that across all 5 cognitive style spectra, although there were instances where applying the guidelines closed inclusiveness issues, there were also stubborn inclusiveness issues and inadvertent introductions of inclusiveness issues. Lastly, we found that participants' cognitive styles clustered not only by their gender, but also across different age groups.