Due to the widespread use of data-powered systems in our everyday lives, the notions of bias and fairness have gained significant attention among researchers and practitioners, in both industry and academia. Such issues typically emerge from the data used to train these systems, which comes with varying levels of quality. With the commercialization and deployment of such systems, which are sometimes delegated to make life-changing decisions, a significant effort is being made towards the identification and removal of possible sources of bias that may surface to the final end user. In this position paper, we argue instead that bias is not something that should necessarily be removed in all cases; rather, attention and effort should shift from bias removal to the identification, measurement, indexing, surfacing, and adjustment of bias, which we name bias management. We argue that, if correctly managed, bias can be a resource that is made transparent to users and empowers them to make informed choices about their experience with the system.