We propose a novel three-stage FIND-RESOLVE-LABEL workflow for crowdsourced annotation to reduce ambiguity in task instructions and thus improve annotation quality. Stage 1 (FIND) asks the crowd to find examples whose correct label seems ambiguous given task instructions. Workers are also asked to provide a short tag which describes the ambiguous concept embodied by the specific instance found. We compare collaborative vs. non-collaborative designs for this stage. In Stage 2 (RESOLVE), the requester selects one or more of these ambiguous examples to label (resolving ambiguity). The new label(s) are automatically injected back into task instructions in order to improve clarity. Finally, in Stage 3 (LABEL), workers perform the actual annotation using the revised guidelines with clarifying examples. We compare three designs for using these examples: examples only, tags only, or both. We report image labeling experiments over six task designs using Amazon's Mechanical Turk. Results show improved annotation accuracy and further insights regarding effective design for crowdsourced annotation tasks.
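To make the data flow of the three stages concrete, the sketch below outlines one possible implementation of the FIND-RESOLVE-LABEL pipeline in Python. All names and structures here are hypothetical illustrations, not the authors' actual system.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical data structures illustrating the three-stage workflow;
# identifiers are illustrative only, not taken from the paper's implementation.

@dataclass
class AmbiguousExample:
    item_id: str   # e.g., an image identifier
    tag: str       # short worker-provided tag naming the ambiguous concept

@dataclass
class TaskInstructions:
    text: str
    clarifying_examples: Dict[str, str] = field(default_factory=dict)  # item_id -> label

def find_stage(worker_reports: List[AmbiguousExample]) -> List[AmbiguousExample]:
    # Stage 1 (FIND): workers flag items whose correct label seems ambiguous
    # given the instructions, each with a short tag describing the ambiguity.
    return worker_reports

def resolve_stage(instructions: TaskInstructions,
                  requester_labels: Dict[str, str]) -> TaskInstructions:
    # Stage 2 (RESOLVE): the requester labels selected ambiguous items;
    # the new labels are injected back into the instructions as clarifications.
    instructions.clarifying_examples.update(requester_labels)
    return instructions

def label_stage(instructions: TaskInstructions, items: List[str]) -> Dict[str, str]:
    # Stage 3 (LABEL): workers annotate using the revised guidelines.
    # Placeholder labels stand in for crowd judgments collected at this stage.
    return {item: "unlabeled" for item in items}
```

In this sketch, the requester's choice of which flagged examples to resolve, and whether workers in Stage 3 see the examples, the tags, or both, correspond to the design variations compared in the experiments.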