In recent years, artificial intelligence systems and computer vision algorithms have increasingly been accused of harboring implicit biases. Although these conversations are now more widespread and systems are improving through extensive testing and broader training data, biases still exist. One class of systems in which bias has been observed is facial recognition, where disparities have been reported on the basis of gender, ethnicity, skin tone, and other facial attributes. This is all the more troubling given that such systems are deployed in practically every sector of industry today: from applications as critical as criminal identification to ones as mundane as registering attendance, they have gained a huge market, especially in recent years. That alone is reason enough for developers of these systems to keep bias to a bare minimum, or ideally eliminate it entirely, in order to avoid serious problems such as favoring a particular gender, race, or class of people, or leaving a group of people vulnerable to false accusations because the system is unable to recognize them correctly.