Artificial intelligence systems, designed to learn from the data presented to them, are used throughout society. They screen loan applicants, make sentencing recommendations for criminal defendants, scan social media posts for disallowed content, and more. Because these systems do not assign meaning to the complex correlation networks they learn, they can learn associations that do not reflect causality, leading to decisions that are sub-optimal and indefensible. Beyond producing sub-optimal decisions, such systems may expose their designers and operators to legal liability by learning correlations that violate anti-discrimination and other laws governing which factors may be used in particular types of decision making. This paper presents a machine learning expert system built from meaning-assigned nodes (facts) and correlations (rules). Multiple potential implementations are considered and evaluated under different conditions, including different network error and augmentation levels and different amounts of training. The performance of these systems is compared to that of random and fully connected networks.
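To make the meaning-assigned architecture concrete, the following is a minimal sketch, not the paper's implementation, of a rule-fact network in which named facts hold values, rules combine them with weights, and the weights are adjusted against training cases. The fact names (income_ratio, payment_history, approve_score), the weight-update scheme, and all function names are illustrative assumptions introduced here for explanation only.

```python
import random

class Rule:
    """Combines two named input facts into an output fact; the two weights sum to 1."""
    def __init__(self, in_a, in_b, out):
        self.in_a, self.in_b, self.out = in_a, in_b, out
        self.w_a = random.random()
        self.w_b = 1.0 - self.w_a

    def apply(self, facts):
        facts[self.out] = self.w_a * facts[self.in_a] + self.w_b * facts[self.in_b]


def run_network(rules, facts):
    """Fire each rule in order, updating the fact store in place."""
    for rule in rules:
        rule.apply(facts)
    return facts


def train(rules, cases, target_fact, rate=0.05, epochs=200):
    """Hypothetical training step: nudge each rule's weights to reduce
    the error on a designated target fact, keeping the weights summing to 1."""
    for _ in range(epochs):
        for inputs, target in cases:
            facts = run_network(rules, dict(inputs))
            error = facts[target_fact] - target
            for rule in rules:
                # Gradient of squared error w.r.t. w_a is error * (a - b),
                # since the rule output is w_a*a + (1 - w_a)*b.
                delta = rate * error * (facts[rule.in_a] - facts[rule.in_b])
                rule.w_a = min(1.0, max(0.0, rule.w_a - delta))
                rule.w_b = 1.0 - rule.w_a


# Example: two meaning-assigned facts feed a decision fact through one rule.
rules = [Rule("income_ratio", "payment_history", "approve_score")]
cases = [({"income_ratio": 0.8, "payment_history": 0.9, "approve_score": 0.0}, 0.85),
         ({"income_ratio": 0.2, "payment_history": 0.4, "approve_score": 0.0}, 0.30)]
train(rules, cases, "approve_score")
print(run_network(rules, {"income_ratio": 0.8, "payment_history": 0.9,
                          "approve_score": 0.0}))
```

Because every node and connection carries an assigned meaning, a learned weighting can be inspected and defended (or rejected) in a way that an unlabeled correlation network cannot; the paper's comparisons against random and fully connected networks probe what this constraint costs in performance.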