Large Language Models (LLMs), such as GPT-3, have demonstrated remarkable natural language processing and generation capabilities and have been applied to a variety of tasks, such as source code generation. This paper explores the potential of integrating LLMs into hazard analysis for safety-critical systems, a process we refer to as co-hazard analysis (CoHA). In CoHA, a human analyst interacts with an LLM via a context-aware chat session and uses the responses to support elicitation of possible hazard causes. In this experiment, we explore CoHA with three increasingly complex versions of a simple system, using OpenAI's ChatGPT service. The quality of ChatGPT's responses was systematically assessed to determine the feasibility of CoHA given the current state of LLM technology. The results suggest that LLMs may be useful for supporting human analysts performing hazard analysis.