Although guidelines for human-AI interaction (HAI) provide important advice on how to improve user experiences with AI products, little is known about how diverse users experience such systems. Without understanding how diverse users' experiences with AI products differ, designers lack the information they need to make AI products that serve users equitably. To investigate, we disaggregated data from 1,016 human participants according to five cognitive styles -- their attitudes toward risk, their motivations, their learning styles (by process vs. by tinkering), their information processing styles, and their computer self-efficacy. Our results revealed situations in which applying existing HAI guidelines helped these cognitively diverse participants equitably, situations in which applying them helped participants inequitably, and situations in which stubborn inequity problems persisted despite applying the guidelines. The results also revealed that these situations pervaded 15 of the 16 experiments, and that they arose for all five of the cognitive style spectra. Finally, the results revealed the cognitive style disaggregation's impacts by participants' demographics -- showing statistical clusterings not only by gender, but also by intersectional gender-age groups.