Crowdsourcing encompasses everything from large collaborative projects to microtasks performed in parallel and at scale. However, understanding subjective preferences can still be difficult: a majority of problems do not have validated questionnaires, and pairwise comparisons do not scale, even with access to the crowd. Furthermore, in daily life we are used to expressing opinions as critiques (e.g. it was too cold, too spicy, too big), rather than describing precise preferences or choosing between (perhaps equally bad) discrete options. Unfortunately, it is difficult to analyze such qualitative feedback, especially when we want to make quantitative decisions. In this article, we present collective criticism, a crowdsourcing approach in which users provide feedback on microtasks in the form of critiques, such as "it was too easy/too challenging". This qualitative feedback is used to perform quantitative analysis of users' preferences and opinions. Collective criticism has several advantages over other approaches: "too much/too little"-style critiques are easy for users to provide, and they allow us to build predictive models for the optimal parameterization of the variables being critiqued. We present two case studies where we model: (i) aesthetic preferences in neural style transfer and (ii) hedonic experiences in the video game Tetris. These studies demonstrate the flexibility of our approach, and show that it produces robust results that are straightforward for experimenters to interpret and in line with users' stated preferences.