In recent years, machine learning (ML) has come to rely more heavily on crowdworkers, both for building bigger datasets and for addressing research questions requiring human interaction or judgment. Owing to the diverse tasks performed by crowdworkers, and the myriad ways the resulting datasets are used, it can be difficult to determine when these individuals are best thought of as workers, versus as human subjects. These difficulties are compounded by conflicting policies, with some institutions and researchers treating all ML crowdwork as human subjects research, and other institutions holding that ML crowdworkers rarely constitute human subjects. Additionally, few ML papers involving crowdwork mention IRB oversight, raising the prospect that many might not be in compliance with ethical and regulatory requirements. In this paper, we focus on research in natural language processing to investigate the appropriate designation of crowdsourcing studies and the unique challenges that ML research poses for research oversight. Crucially, under the U.S. Common Rule, these judgments hinge on determinations of "aboutness", both whom (or what) the collected data is about and whom (or what) the analysis is about. We highlight two challenges posed by ML: (1) the same set of workers can serve multiple roles and provide many sorts of information; and (2) compared to the life sciences and social sciences, ML research tends to embrace a dynamic workflow, where research questions are seldom stated ex ante and data sharing opens the door for future studies to ask questions about different targets from the original study. In particular, our analysis exposes a potential loophole in the Common Rule, where researchers can elude research ethics oversight by splitting data collection and analysis into distinct studies. We offer several policy recommendations to address these concerns.