Voice-controlled systems are becoming ubiquitous in many IoT applications such as home/industrial automation, automotive infotainment, and healthcare. While cloud-based voice services (\eg Alexa, Siri) can leverage high-performance computing servers, some use cases (\eg robotics, automotive infotainment) may require executing the natural language processing (NLP) tasks offline, often on resource-constrained embedded devices. Large language models such as BERT and its variants are primarily developed with compute-heavy servers in mind. Despite their strong performance across various NLP tasks, the large size and numerous parameters of BERT models pose substantial obstacles to offline computation on embedded systems. Lighter replacements of such language models (\eg DistilBERT and TinyBERT) often sacrifice accuracy, particularly for complex NLP tasks. To date, it remains unclear \ca whether the state-of-the-art language models, \viz BERT and its variants, are deployable on embedded systems with limited processor, memory, and battery power, and \cb if so, what the ``right'' set of configurations and parameters is for a given NLP task. This paper presents an \textit{exploratory study of modern language models} under different resource constraints and accuracy budgets to derive empirical observations about these resource/accuracy trade-offs. In particular, we study how the four most commonly used BERT-based language models (\viz BERT, RoBERTa, DistilBERT, and TinyBERT) perform on embedded systems. We evaluate them on a Raspberry Pi-based robotic platform with three hardware configurations and four datasets spanning various NLP tasks. Our findings can help designers understand the deployability and performance of modern language models, especially those based on BERT architectures, and thus save considerable time otherwise spent on trial-and-error efforts.