Work in computer science has established that, contrary to conventional wisdom, for a given prediction problem there are almost always multiple models with equivalent performance, a phenomenon often termed model multiplicity. Critically, models of equivalent performance can produce different predictions for the same individual and, in aggregate, exhibit different levels of impact across demographic groups. Thus, when an algorithmic system displays a disparate impact, model multiplicity suggests that developers could discover an alternative model that performs equally well but has less discriminatory impact. Indeed, the promise of model multiplicity is that an equally accurate but less discriminatory algorithm (LDA) almost always exists. Yet without a dedicated search, developers are unlikely to discover potential LDAs. Model multiplicity and the availability of LDAs have significant ramifications for the legal response to discriminatory algorithms, in particular for disparate impact doctrine, which has long taken the availability of alternatives with less disparate effect into account when assessing liability. A close reading of legal authorities over the decades reveals that the law has repeatedly recognized that the existence of a less discriminatory alternative can bear on a defendant's burden of justification at the second step of disparate impact analysis. Indeed, under disparate impact doctrine, it makes little sense to call a given algorithmic system used by an employer, creditor, or housing provider "necessary" if an equally accurate model with less disparate effect is available and discoverable with reasonable effort. We therefore argue that the law should impose a duty to conduct a reasonable search for LDAs on entities that develop and deploy predictive models in covered civil rights domains.
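To make the phenomenon concrete, the following is a minimal sketch of model multiplicity on synthetic data; the setup, feature names, and numbers are illustrative assumptions and are not drawn from the article. It trains two models of near-identical accuracy, one on a feature that is also a proxy for group membership and one on a group-independent feature carrying the same predictive signal, and then compares their selection rates across groups.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 20_000

g = rng.integers(0, 2, n)  # protected group membership (0 or 1), illustrative
y = rng.integers(0, 2, n)  # true outcome; independent of group in this toy setup

# A feature correlated with the outcome AND the group (a "proxy" feature).
x_proxy = y + g + rng.normal(0, 1.0, n)
# A feature with the same predictive signal and matched noise variance,
# but independent of the group.
x_clean = y + rng.normal(0, np.sqrt(1.25), n)

m_proxy = LogisticRegression().fit(x_proxy.reshape(-1, 1), y)
m_clean = LogisticRegression().fit(x_clean.reshape(-1, 1), y)

pred_proxy = m_proxy.predict(x_proxy.reshape(-1, 1))
pred_clean = m_clean.predict(x_clean.reshape(-1, 1))

for name, pred in [("proxy model", pred_proxy), ("clean model", pred_clean)]:
    acc = accuracy_score(y, pred)
    rate0 = pred[g == 0].mean()  # selection rate for group 0
    rate1 = pred[g == 1].mean()  # selection rate for group 1
    print(f"{name}: accuracy={acc:.3f}, "
          f"selection rate g=0: {rate0:.2f}, g=1: {rate1:.2f}")
```

On this toy data the two models typically land within a fraction of a percentage point of each other in accuracy, yet the proxy model selects the two groups at sharply different rates while the clean model does not. In the paper's terms, the clean model is a less discriminatory alternative to the proxy model, and nothing about aggregate performance alone would have revealed it without a deliberate search.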