Many platforms for benchmarking optimization algorithms offer users the possibility of sharing their experimental data with the purpose of promoting reproducible and reusable research. However, different platforms use different data models and formats, which drastically inhibits the identification of relevant data sets, their interpretation, and their interoperability. Consequently, a semantically rich, ontology-based, machine-readable data model is highly desirable. We report in this paper on the development of such an ontology, which we name OPTION (OPTImization algorithm benchmarking ONtology). Our ontology provides the vocabulary needed for semantic annotation of the core entities involved in the benchmarking process, such as algorithms, problems, and evaluation measures. It also provides the means for automated data integration, improved interoperability, powerful querying, and reasoning, thereby enriching the value of the benchmark data. We demonstrate the utility of OPTION by annotating and querying a corpus of benchmark performance data from the BBOB workshops, a use case that can easily be extended to cover other benchmarking data collections.