Many optimization algorithm benchmarking platforms allow users to share their experimental data to promote reproducible and reusable research. However, different platforms use different data models and formats, which drastically complicates the identification of relevant datasets, their interpretation, and their interoperability. Therefore, a semantically rich, ontology-based, machine-readable data model that can be used by different platforms is highly desirable. In this paper, we report on the development of such an ontology, which we call OPTION (OPTImization algorithm benchmarking ONtology). Our ontology provides the vocabulary needed for semantic annotation of the core entities involved in the benchmarking process, such as algorithms, problems, and evaluation measures. It also provides the means for automatic data integration, improved interoperability, and powerful querying capabilities, thereby increasing the value of the benchmarking data. We demonstrate the utility of OPTION by annotating and querying a corpus of benchmark performance data from the BBOB collection of the COCO framework and from the Yet Another Black-Box Optimization Benchmark (YABBOB) family of the Nevergrad environment. In addition, we integrate features of the BBOB functional performance landscape into the OPTION knowledge base using publicly available datasets with exploratory landscape analysis. Finally, we integrate the OPTION knowledge base into the IOHprofiler environment and provide users with the ability to perform meta-analysis of performance data.