As AI technologies increase in capability and ubiquity, AI accidents are becoming more common. Drawing on normal accident theory, high reliability theory, and open systems theory, we create a framework for understanding the risks associated with AI applications. In addition, we use AI safety principles to quantify the unique risks posed by increased intelligence and human-like qualities in AI. Together, these two fields give a more complete picture of the risks of contemporary AI. By focusing on system properties near accidents instead of seeking a root cause of accidents, we identify where attention should be paid to safety for current-generation AI systems.