The democratization of generative AI introduces new forms of human-AI interaction and raises urgent safety, ethical, and cybersecurity concerns. We develop a socio-technical explanation for how generative AI enables and scales cybercrime. Drawing on affordance theory and technological amplification, we argue that generative AI systems create new action possibilities for cybercriminals and magnify pre-existing malicious intent by lowering expertise barriers and increasing attack efficiency. To illustrate this framework, we conduct interrupted time series analyses of two large datasets: (1) 464,190,074 malicious IP address reports from AbuseIPDB, and (2) 281,115 cryptocurrency scam reports from Chainabuse. Using November 30, 2022, as a high-salience public-access shock, we estimate the counterfactual trajectory of reported cyber abuse absent the release, providing an early-warning impact assessment of a general-purpose AI technology. Across both datasets, we observe statistically significant post-intervention increases in reported malicious activity, including an immediate increase of over 1.12 million weekly malicious IP reports and about 722 weekly cryptocurrency scam reports, with sustained growth in the latter. We discuss implications for AI governance, platform-level regulation, and cyber resilience, emphasizing the need for multi-layer socio-technical strategies that help key stakeholders maximize AI's benefits while mitigating its growing cybercrime risks.
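The counterfactual estimation described above can be illustrated with a minimal segmented-regression sketch of an interrupted time series design. This is not the authors' actual model (their specification, covariates, and error structure are not given in the abstract); it uses synthetic weekly counts and an assumed intervention week purely to show how a level change and slope change are estimated around a shock date:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic weekly report counts: 100 pre-intervention weeks, 60 post.
# (Illustrative data only -- not the AbuseIPDB or Chainabuse series.)
n_pre, n_post = 100, 60
t = np.arange(n_pre + n_post)
post = (t >= n_pre).astype(float)             # level-shift indicator D_t
t_since = np.where(post == 1, t - n_pre, 0.0) # weeks elapsed since intervention

# Ground truth: baseline trend plus a level jump of 50 and slope change of 2.
y = 500 + 1.5 * t + 50.0 * post + 2.0 * t_since + rng.normal(0, 10, t.size)

# Segmented regression: y_t = b0 + b1*t + b2*D_t + b3*(t - t0)*D_t + e_t
X = np.column_stack([np.ones_like(t, dtype=float), t, post, t_since])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b3 = beta

# b2 estimates the immediate level change; b3 the change in weekly trend.
# The counterfactual trajectory is the pre-intervention trend extrapolated
# past the shock date: b0 + b1*t.
counterfactual = b0 + b1 * t
print(f"level change ~ {b2:.1f}, slope change ~ {b3:.2f}")
```

In the paper's terms, b2 corresponds to the immediate post-intervention jump (e.g. the reported increase of over 1.12 million weekly malicious IP reports) and b3 to the sustained growth observed in the cryptocurrency scam series; a published analysis would additionally need autocorrelation-robust inference, which this sketch omits.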