In this paper we present a natural language programming framework to consider how the fairness of acts can be measured. For the purposes of this paper, a fair act is defined as one that a person would accept if it were done to themselves. The approach is based on an implementation of the golden rule (GR) in the digital domain. Despite the GR's prevalence as an axiom throughout history, no transfer of this moral philosophy into computational systems exists. In this paper we consider how to algorithmically operationalise this rule so that it may be used to measure sentences such as "the boy harmed the girl" and categorise them as fair or unfair. A review of, and reply to, criticisms of the GR is provided. A suggestion is also made as to how the technology may be implemented to avoid unfair biases in word embeddings, given that individuals would typically not wish to be on the receiving end of an unfair act, such as racism, irrespective of whether the corpus in use deems such discrimination praiseworthy.
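As a rough illustration of the kind of GR check described above, the following is a minimal sketch, not the paper's implementation: it assumes sentences of the simple form "the X verbed the Y", a naive (agent, verb, patient) extraction, and a hypothetical set UNDESIRABLE_ACTS standing in for a richer judgement (e.g. one derived from sentiment over word embeddings) of which acts an agent would not accept done to itself.

```python
# Minimal illustrative sketch of a golden-rule (GR) fairness check.
# Assumptions (not from the paper): sentences have the simple form
# "the <agent> <verb> the <patient>", and UNDESIRABLE_ACTS is a
# hypothetical stand-in for a learned acceptability judgement.

UNDESIRABLE_ACTS = {"harmed", "robbed", "insulted"}  # hypothetical list


def would_accept(verb: str) -> bool:
    """Would an agent accept this act being done to itself?"""
    return verb not in UNDESIRABLE_ACTS


def golden_rule_fair(sentence: str) -> bool:
    """Classify a simple sentence as fair (True) or unfair (False)
    by imagining the same act done to the agent instead."""
    tokens = sentence.lower().rstrip(".").split()
    # Naive triple extraction: "the boy harmed the girl" -> ("boy", "harmed", "girl")
    if len(tokens) >= 5 and tokens[0] == "the" and tokens[3] == "the":
        agent, verb, patient = tokens[1], tokens[2], tokens[4]
    else:
        raise ValueError("expected a sentence of the form 'the X <verb> the Y'")
    # GR test: form the role-reversed act ("the boy harmed the boy") and
    # ask whether the agent would accept it; acceptance implies fairness.
    reflexive = f"the {agent} {verb} the {agent}"
    return would_accept(verb)


print(golden_rule_fair("the boy harmed the girl"))   # False -> unfair
print(golden_rule_fair("the boy helped the girl"))   # True  -> fair
```

In a fuller system, the hypothetical would_accept stand-in would be replaced by the measurement the abstract alludes to, so that acceptability reflects how the act is generally evaluated rather than a hand-written list.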