The engineering community has recently witnessed the emergence of chatbot technology with the release of OpenAI ChatGPT-4 and Google Bard. While these chatbots have been reported to perform well and even pass various standardized tests, including medical and law exams, this forum paper explores whether they can also pass the Fundamentals of Engineering (FE) and Principles and Practice of Engineering (PE) exams. A diverse range of civil and environmental engineering questions and scenarios, of the kind commonly found in the FE and PE exams, was used to evaluate the chatbots' performance. The chatbots' responses were analyzed for relevance, accuracy, and clarity and then compared against the recommendations of the National Council of Examiners for Engineering and Surveying (NCEES). Our results show that ChatGPT-4 and Bard scored 70.9% and 39.2%, respectively, on the FE exam, and 46.2% and 41% on the PE exam. It is evident that the current version of ChatGPT-4 could potentially pass the FE exam. While future versions are much more likely to pass both exams, this study also highlights the potential of using chatbots as teaching assistants and as guides for practicing engineers.