Inspired by the way the human brain works, the emerging hyperdimensional computing (HDC) paradigm is attracting increasing attention. HDC is a computing scheme based on the working mechanism of the brain: it computes with deep and abstract patterns of neural activity instead of actual numbers. Compared with traditional ML algorithms such as DNNs, HDC is more memory-centric, granting it advantages such as a relatively smaller model size, lower computation cost, and one-shot learning, which make it a promising candidate for low-cost computing platforms. However, the robustness of HDC models has not been systematically studied. In this paper, we systematically expose the unexpected or incorrect behaviors of HDC models by developing HDXplore, a black-box differential testing-based framework. We leverage multiple HDC models with similar functionality as cross-referencing oracles, avoiding the need to manually check or label the original inputs. We also propose different perturbation mechanisms in HDXplore. HDXplore automatically finds thousands of incorrect corner-case behaviors of the HDC model. We further propose two retraining mechanisms; using the corner cases generated by HDXplore to retrain the HDC model improves its accuracy by up to 9%.
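At a high level, the cross-referencing idea can be illustrated with the following sketch. It assumes a hypothetical `predict` interface for each HDC model and a simple noise-based perturbation; it is only an illustration of differential testing, not the actual HDXplore implementation or perturbation mechanisms.

```python
# Illustrative sketch of differential testing with cross-referencing HDC models.
# The model interface (`predict`) and the perturbation routine are hypothetical
# placeholders, not the HDXplore implementation described in the paper.
import numpy as np

def perturb(x, noise_level=0.05, rng=None):
    """Apply a simple additive-noise perturbation to one input sample (illustrative only)."""
    rng = rng or np.random.default_rng()
    return x + noise_level * rng.standard_normal(x.shape)

def find_corner_cases(models, inputs, max_tries=10):
    """Collect perturbed inputs on which the cross-referencing HDC models disagree."""
    corner_cases = []
    for x in inputs:
        candidate = x
        for _ in range(max_tries):
            # Each model is assumed to return a class label for the input.
            preds = {m.predict(candidate) for m in models}
            if len(preds) > 1:  # models disagree -> candidate is a likely corner case
                corner_cases.append(candidate)
                break
            candidate = perturb(candidate)
    return corner_cases
```

Because the models act as oracles for one another, disagreement alone flags a suspicious input; no ground-truth labels or manual inspection are needed to collect the corner cases used for retraining.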