In a typical Internet-of-Things setting involving scientific applications, a target computation can be evaluated in many different ways, depending on how the computations are split among the various devices. On the one hand, different implementations (or algorithms), although mathematically equivalent, might exhibit significant differences in performance. On the other hand, some of the implementations are likely to show similar performance characteristics. In this paper, we focus on analyzing the performance of a given set of algorithms by clustering them into performance classes. To this end, we use a measurement-based approach to evaluate and score algorithms based on pairwise comparisons; we refer to this approach as "Relative performance analysis". Each comparison yields one of three outcomes: one algorithm can be "better" than, "worse" than, or "equivalent" to another; algorithms evaluated as equivalent in performance are merged into the same performance class. We show that our clustering methodology facilitates algorithm selection with respect to more than one metric; for instance, from the subset of equivalently fast algorithms, one could then select the one that consumes the least energy on a given device.
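The clustering idea above can be illustrated with a minimal sketch: given pairwise comparison outcomes ("better", "worse", "equivalent"), algorithms judged equivalent are merged into the same performance class via union-find. The function names, the runtime data, and the 10% equivalence threshold are hypothetical illustrations, not the paper's actual measurement procedure.

```python
def cluster_equivalent(algorithms, compare):
    """Merge algorithms into performance classes.

    compare(a, b) returns "better", "worse", or "equivalent";
    algorithms judged equivalent end up in the same class.
    """
    parent = {a: a for a in algorithms}  # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(x, y):
        rx, ry = find(x), find(y)
        if rx != ry:
            parent[ry] = rx

    for i, a in enumerate(algorithms):
        for b in algorithms[i + 1:]:
            if compare(a, b) == "equivalent":
                union(a, b)

    # Group algorithms by the root of their equivalence class.
    classes = {}
    for a in algorithms:
        classes.setdefault(find(a), []).append(a)
    return list(classes.values())


# Hypothetical example: compare by measured runtime (seconds),
# treating relative differences under 10% as "equivalent".
runtimes = {"alg1": 1.00, "alg2": 1.05, "alg3": 2.40, "alg4": 2.50}

def compare(a, b):
    ra, rb = runtimes[a], runtimes[b]
    if abs(ra - rb) / min(ra, rb) < 0.10:
        return "equivalent"
    return "better" if ra < rb else "worse"

print(cluster_equivalent(list(runtimes), compare))
# → [['alg1', 'alg2'], ['alg3', 'alg4']]
```

Note that a threshold-based notion of "equivalent" is not strictly transitive; the union-find merge closes it transitively, matching the paper's goal of collapsing equivalently performing algorithms into one class, from which a secondary metric (e.g. energy consumption) can then pick the final candidate.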