Quality estimation (QE) -- the automatic assessment of translation quality -- has recently become crucial across several stages of the translation pipeline, from data curation to training and decoding. While QE metrics have been optimized to align with human judgments, whether they encode social biases has been largely overlooked. Biased QE risks favoring certain demographic groups over others, e.g., by exacerbating gaps in visibility and usability. This paper defines and investigates gender bias in QE metrics and discusses its downstream implications for machine translation (MT). Experiments with state-of-the-art QE metrics across multiple domains, datasets, and languages reveal significant bias. When a human entity's gender is undisclosed in the source, masculine-inflected translations score higher than feminine-inflected ones, and gender-neutral translations are penalized. Even when contextual cues disambiguate gender, using context-aware QE metrics leads to more errors in selecting the correctly inflected translation for feminine referents than for masculine ones. Moreover, a biased QE metric affects data filtering and quality-aware decoding. Our findings highlight the need for a renewed focus on gender when developing and evaluating QE metrics.