In this paper, we present findings from a semi-experimental exploration of rater diversity and its influence on safety annotations of conversations generated by humans interacting with a generative AI chatbot. We find significant differences in the judgments produced by raters from different geographic regions and annotation platforms, and correlate these perspectives with demographic sub-groups. Our work helps define best practices in model development, specifically the human evaluation of generative models, against the backdrop of growing work on sociotechnical AI evaluations.