Multicriteria decision-making (MCDM) methods depend critically on the choice of normalization technique: different choices can alter 20-40% of the final rankings. Current practice selects normalization methods ad hoc, without systematic robustness evaluation. We present a framework that addresses this methodological uncertainty through automated exploration of the scaling-transformation space. The implementation builds on the existing Scikit-Criteria infrastructure to automatically generate all possible methodological combinations and support robust comparative analysis.
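The core idea — enumerating scaling transformations and comparing the rankings each one induces under a fixed aggregation rule — can be sketched in plain NumPy. This is an illustrative sketch, not the framework's actual implementation: the decision matrix, weights, and scaler set below are invented for demonstration, and Scikit-Criteria's own API is not used.

```python
import itertools
import numpy as np

# Hypothetical 4-alternative x 3-criterion decision matrix (all benefit criteria).
matrix = np.array([
    [250.0, 16.0, 12.0],
    [200.0, 20.0,  8.0],
    [300.0, 11.0, 14.0],
    [275.0, 15.0, 10.0],
])
weights = np.array([0.5, 0.3, 0.2])

# Common column-wise scaling transformations (the "scaling transformation space").
scalers = {
    "minmax": lambda m: (m - m.min(0)) / (m.max(0) - m.min(0)),
    "vector": lambda m: m / np.linalg.norm(m, axis=0),
    "sum":    lambda m: m / m.sum(0),
    "max":    lambda m: m / m.max(0),
}

def rank(scores):
    """Rank alternatives: 1 = best (highest weighted-sum score)."""
    order = np.argsort(-scores)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(scores) + 1)
    return ranks

# Automatically generate one ranking per scaler under a weighted-sum aggregation.
rankings = {name: rank(scale(matrix) @ weights) for name, scale in scalers.items()}

# Pairwise disagreement: fraction of alternatives whose position changes.
for (a, ra), (b, rb) in itertools.combinations(rankings.items(), 2):
    print(f"{a} vs {b}: {np.mean(ra != rb):.0%} of positions differ")
```

In the full framework the weighted-sum rule would itself be one axis of the combination space, alongside the scaler; this sketch fixes the aggregator to isolate the effect of normalization alone.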