Online crowdsourcing platforms have made it increasingly easy to evaluate algorithm outputs with survey questions like ``which image is better, A or B?'', leading to the proliferation of such studies in vision and graphics research papers. Results of these studies are often used as quantitative evidence in support of a paper's contributions. We argue that, when conducted hastily as an afterthought, such studies can yield uninformative and, in some cases, misleading conclusions. We call for increased attention to both the design and reporting of user studies in computer vision and graphics papers, towards (1) improved replicability and (2) improved project direction. Together with this call, we offer an overview of methodologies from user experience research (UXR), human-computer interaction (HCI), and related fields to increase exposure to the available methodologies and best practices. We discuss foundational user research methods (e.g., needfinding) that are presently underutilized in computer vision and graphics research but can provide valuable project direction. We provide further pointers to the literature for readers interested in exploring other UXR methodologies. Finally, we describe broader open issues and recommendations for the research community. We encourage authors and reviewers alike to recognize where in the project timeline a user study would be most informative, that not every research contribution requires a user study, and that a misguided emphasis on user studies can incentivize perfunctory studies.