The data revolution has generated a huge demand for data-driven solutions. This demand propels a growing number of easy-to-use tools and trainings for aspiring data scientists that enable the rapid building of predictive models. Today, weapons of math destruction can be easily built and deployed without detailed planning and validation. This rapidly extends the list of AI failures, i.e. deployments that lead to financial losses or even violate democratic values such as equality, freedom, and justice. The lack of planning, rules, and standards around model development leads to the "anarchisation of AI". This problem is reported under different names, such as validation debt, reproducibility crisis, and lack of explainability. Post-mortem analyses of AI failures often reveal mistakes made in the early phases of model development or data acquisition. Thus, instead of curing the consequences of deploying harmful models, we should prevent them as early as possible by paying more attention to the initial planning stage. In this paper, we propose a quick and simple framework to support the planning of AI solutions. The POCA framework is based on four pillars: Performance, Opaqueness, Consequences, and Assumptions. It helps to set expectations and plan constraints for an AI solution before any model is built and any data are collected. With the help of the POCA method, preliminary requirements can be defined for the model-building process, so that costly model misspecification errors can be identified as soon as possible or even avoided. AI researchers, product owners, and business analysts can use this framework in the initial stages of building AI solutions.