This research paper explores the potential of Large Language Models (LLMs) to enhance speaking skills. We first present Comuniqa, a novel LLM-based system for this task. We then take a human-centric approach to evaluating this system, comparing it with human experts, and investigate whether combining feedback from both LLMs and human experts can improve overall learning outcomes. We use purposive and random sampling to recruit participants, categorizing them into three groups: those who use LLM-enabled apps to improve their speaking skills, those guided by human experts for the same task, and those who use both the LLM-enabled apps and the human experts. Drawing on surveys, interviews, and actual study sessions, we provide a detailed perspective on the effectiveness of these different learning modalities. Our preliminary findings suggest that while LLM-based systems perform commendably, they fall short of human experts in both accuracy and empathy.