Random Forests (RFs) are widely used machine learning models on low-power embedded devices, due to their hardware-friendly operation and high accuracy on practically relevant tasks. The accuracy of a RF often increases with the number of internal weak learners (decision trees), but at the cost of a proportional increase in inference latency and energy consumption. Such costs can be mitigated by considering that, in most applications, inputs are not all equally difficult to classify. Therefore, a large RF is often necessary only for (few) hard inputs, and wasteful for easier ones. In this work, we propose an early-stopping mechanism for RFs, which terminates the inference as soon as a high-enough classification confidence is reached, reducing the number of weak learners executed for easy inputs. The early-stopping confidence threshold can be controlled at runtime, in order to favor either energy saving or accuracy. We apply our method to three different embedded classification tasks, on a single-core RISC-V microcontroller, achieving an energy reduction from 38% to more than 90% with a drop of less than 0.5% in accuracy. We also show that our approach outperforms previous adaptive ML methods for RFs.
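The confidence-based early-stopping idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a scikit-learn `RandomForestClassifier`, executes its trees sequentially, accumulates the running mean of the per-class probabilities, and exits as soon as the top class's mean probability reaches a tunable threshold (the runtime knob trading energy for accuracy).

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Toy setup (assumption: any classification dataset works here).
X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)


def early_stop_predict(forest, x, threshold=0.9):
    """Run the forest's trees one at a time; stop once the running
    mean class probability of the leading class reaches `threshold`.
    Returns (predicted_label, number_of_trees_executed)."""
    n_classes = len(forest.classes_)
    prob_sum = np.zeros(n_classes)
    for i, tree in enumerate(forest.estimators_, start=1):
        # Each weak learner contributes its class-probability vector.
        prob_sum += tree.predict_proba(x.reshape(1, -1))[0]
        mean_prob = prob_sum / i
        if mean_prob.max() >= threshold:
            # Confident enough: skip the remaining trees (energy saving).
            return forest.classes_[mean_prob.argmax()], i
    # Fell through: behaved like the full (static) forest.
    return forest.classes_[prob_sum.argmax()], len(forest.estimators_)


label, trees_used = early_stop_predict(rf, X[0], threshold=0.9)
```

A higher `threshold` approaches the accuracy of the full forest at a higher average tree count; a lower one trims more trees for easy inputs, which is where the latency and energy savings come from.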