Assessing the effectiveness of large language models (LLMs) across different tasks is crucial for understanding their strengths and weaknesses. This paper presents the Hierarchical Prompting Taxonomy (HPT), grounded in human cognitive principles and designed to assess LLMs by examining the cognitive demands of various tasks. The HPT employs the Hierarchical Prompting Framework (HPF), a prompt-selection framework that organizes five distinct prompting strategies by the cognitive load they place on LLMs. This study also introduces the Hierarchical Prompting Index (HPI), a universal measure of task complexity that reflects LLMs' abilities across different datasets. The HPT offers a reliable method for evaluating LLMs' problem-solving skills in diverse scenarios, leading to clearer conclusions. Extensive experiments with multiple datasets and LLMs show that the HPF improves LLM performance by 2\% to 63\% on standard benchmark datasets, confirming the effectiveness of the HPT. To support future research in this domain, the implementations of the HPT and HPF are publicly available.
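To make the framework's mechanics concrete, the following is a minimal sketch of how an HPF-style selection loop and the HPI might be computed. The strategy names, the success test, and the averaging formula are illustrative assumptions, not the paper's exact implementation.

```python
# Illustrative sketch of an HPF-style prompt-selection loop.
# Strategy names below are placeholders ordered by assumed cognitive load;
# the real framework's five strategies may differ.
STRATEGIES = [
    "level-1-prompt",  # lowest assumed cognitive load
    "level-2-prompt",
    "level-3-prompt",
    "level-4-prompt",
    "level-5-prompt",  # highest assumed cognitive load
]

def solve_with_hpf(task, model, strategies=STRATEGIES):
    """Try strategies from lowest to highest load.

    Returns (answer, level) where level is the first strategy index
    (1-based) at which the model produced an answer, or
    len(strategies) + 1 as a penalty if every level failed.
    """
    for level, strategy in enumerate(strategies, start=1):
        answer = model(strategy, task)
        if answer is not None:  # model solved the task at this level
            return answer, level
    return None, len(strategies) + 1

def hpi(levels):
    """Hierarchical Prompting Index as the mean level over a dataset
    (an assumed aggregation; lower values mean the tasks were easier
    for the model)."""
    return sum(levels) / len(levels)
```

Under this reading, a dataset on which a model succeeds with low-load prompts yields a small HPI, while one requiring the heaviest strategies pushes the HPI toward the maximum.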