Advances in artificial intelligence need to become more resource-aware and sustainable. This requires clear assessment and reporting of energy-efficiency trade-offs, such as sacrificing fast running time for higher predictive performance. While first methods for investigating efficiency have been proposed, we still lack comprehensive results for popular methods and data sets. In this work, we attempt to fill this information gap by providing empirical insights for popular AI benchmarks, with a total of 100 experiments. Our findings provide evidence that each data set has its own efficiency landscape, and show that some methods are more likely than others to behave efficiently.