Indirect discrimination is an issue of major concern in algorithmic models. This is particularly the case in insurance pricing, where protected policyholder characteristics are not allowed to be used for insurance pricing. Simply disregarding protected policyholder information is not an appropriate solution, because this still allows for the possibility of inferring the protected characteristics from the non-protected ones. This leads to so-called proxy or indirect discrimination. Though proxy discrimination is qualitatively different from the group fairness concepts in machine learning, these group fairness concepts have been proposed to 'smooth out' the impact of protected characteristics in the calculation of insurance prices. The purpose of this note is to share some thoughts about group fairness concepts in the light of insurance pricing and to discuss their implications. We present a statistical model that is free of proxy discrimination, and thus unproblematic from an insurance pricing point of view. However, we find that the canonical price in this statistical model does not satisfy any of the three most popular group fairness axioms. This seems puzzling, and we welcome feedback on our example and on the usefulness of these group fairness axioms for non-discriminatory insurance pricing.
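The three group fairness axioms most commonly discussed in the machine learning literature are independence (demographic parity), separation (equalized odds), and sufficiency (predictive parity). The following is a minimal, hypothetical Python sketch of how the corresponding group gaps are measured on synthetic binary decisions; all variable names, coefficients, and data here are illustrative assumptions, not taken from this note's statistical model:

```python
# Hypothetical illustration of the three standard group fairness criteria
# on synthetic data. Nothing here comes from the paper's model; the score
# construction is an arbitrary example that deliberately violates all three.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)          # protected attribute D in {0, 1}
y = rng.integers(0, 2, n)              # true outcome Y
# Synthetic score that depends on both Y and the protected attribute D.
score = 0.4 * y + 0.2 * group + 0.6 * rng.random(n)
pred = (score > 0.5).astype(int)       # binary decision derived from the score

def rate(mask):
    """Positive-decision rate P(pred = 1) on the given subpopulation."""
    return pred[mask].mean()

# 1. Independence / demographic parity: P(pred=1 | D=0) = P(pred=1 | D=1).
dp_gap = abs(rate(group == 0) - rate(group == 1))

# 2. Separation / equalized odds: equal true- and false-positive rates
#    across the two groups.
tpr_gap = abs(rate((group == 0) & (y == 1)) - rate((group == 1) & (y == 1)))
fpr_gap = abs(rate((group == 0) & (y == 0)) - rate((group == 1) & (y == 0)))

# 3. Sufficiency / predictive parity: P(Y=1 | pred=1, D) equal across groups.
def ppv(g):
    mask = (group == g) & (pred == 1)
    return y[mask].mean()

ppv_gap = abs(ppv(0) - ppv(1))

print(f"demographic parity gap:        {dp_gap:.3f}")
print(f"equalized odds gaps (TPR/FPR): {tpr_gap:.3f} / {fpr_gap:.3f}")
print(f"predictive parity gap:         {ppv_gap:.3f}")
```

A gap of zero for a criterion means the decision satisfies that axiom exactly; in practice each gap is compared against a tolerance. In this synthetic construction all three gaps are strictly positive, since the score depends directly on the protected attribute.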