Reducing the computational time required to process large data sets in Data Envelopment Analysis (DEA) is the objective of many studies. Contributions include fundamentally innovative procedures, new or improved preprocessors, and hybridizations among all of these. Ultimately, new contributions are made when the number or size of the linear programs (LPs) solved is reduced. This paper provides a comprehensive analysis and comparison of two competing procedures for processing DEA data sets: BuildHull and Enhanced Hierarchical Decomposition (EHD). A common ground for comparison is established by examining their sequential implementations, applying the same preprocessors to both, when permitted, on a suite of data sets widely employed in the computational DEA literature. In addition to reporting execution times, we discuss how data characteristics affect performance, and we introduce the number and size of the LPs solved as metrics for understanding performance and explaining differences. Our experiments show that the dominance of BuildHull can be substantial on large-scale, high-density data sets. Comparing and explaining performance in terms of the number and size of the LPs solved lays the groundwork for a comparison of the parallel implementations of BuildHull and EHD.