Artificial Intelligence (AI) has revolutionized software development, particularly by automating repetitive tasks and improving developer productivity. While these advancements are well documented, the use of AI-powered tools for Software Vulnerability Management (SVM), such as vulnerability detection and repair, remains underexplored in industry settings. To bridge this gap, our study aims to determine the extent of adoption of AI-powered tools for SVM, identify barriers and facilitators to their use, and gather insights to help improve the tools to better meet industry needs. We conducted a survey study involving 60 practitioners from diverse industry sectors across 27 countries. The survey incorporates both quantitative and qualitative questions to analyze adoption trends, assess tool strengths, identify practical challenges, and uncover opportunities for improvement. Our findings indicate that AI-powered tools are used throughout the SVM life cycle, with 69% of users reporting satisfaction with their current use. Practitioners value these tools for their speed, coverage, and accessibility. However, concerns about false positives, missing context, and trust remain prevalent. We observe a socio-technical adoption pattern in which AI outputs are filtered through human oversight and organizational governance. To support the safe and effective use of AI for SVM, we recommend improvements in explainability, contextual awareness, integration workflows, and validation practices. We assert that these findings offer practical guidance for practitioners, tool developers, and researchers seeking to enhance secure software development through the use of AI.