Hyperdimensional Computing (HDC) is an emerging computational framework that mimics important brain functions by operating over high-dimensional vectors, called hypervectors (HVs). In-memory computing implementations of HDC are desirable since they can significantly reduce data transfer overheads. All existing in-memory HDC platforms consider binary HVs, where each dimension is represented with a single bit. However, utilizing multi-bit HVs allows HDC to achieve acceptable accuracies at lower dimensions, which in turn leads to higher energy efficiency. Thus, we propose a highly accurate and efficient multi-bit in-memory HDC inference platform called MIMHD. MIMHD supports multi-bit operations using ferroelectric field-effect transistor (FeFET) crossbar arrays for multiply-and-add and FeFET multi-bit content-addressable memories for associative search. We also introduce a novel hardware-aware retraining framework (HWART) that trains the HDC model to adapt to MIMHD. For six popular datasets and 4000-dimension HVs, MIMHD using 3-bit (2-bit) precision HVs achieves (i) average accuracies of 92.6% (88.9%), which is 8.5% (4.8%) higher than binary implementations; (ii) 84.1x (78.6x) higher energy efficiency than a GPU; and (iii) 38.4x (34.3x) speedup over a GPU. The 3-bit MIMHD is 4.3x faster and 13x more energy-efficient than binary HDC accelerators while achieving similar accuracies.
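To make the core operations concrete, the following is a minimal sketch (not the paper's implementation) of multi-bit HDC inference: class HVs are quantized to a few integer levels, a query HV is scored against each class via dot products (the multiply-and-add that the FeFET crossbar performs in-memory), and the best-matching class is selected (the associative search performed by the multi-bit CAM). The dimension and bit-width follow the abstract; all variable names and the toy data are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters: D = 4000 dimensions and 3-bit HVs, as in the abstract.
D, NUM_CLASSES, BITS = 4000, 6, 3
LEVELS = 2 ** BITS  # discrete values per dimension for a multi-bit HV

rng = np.random.default_rng(0)

# Class hypervectors quantized to BITS-bit integer levels. In an in-memory
# platform these would reside in the crossbar / multi-bit CAM arrays.
class_hvs = rng.integers(0, LEVELS, size=(NUM_CLASSES, D))

def classify(query_hv: np.ndarray) -> int:
    """Associative search: return the class whose HV is most similar to the
    query. Each dot product is one multiply-and-add pass per class."""
    scores = class_hvs @ query_hv
    return int(np.argmax(scores))

# Toy query: a noisy copy of class 2's hypervector.
query = np.clip(class_hvs[2] + rng.integers(-1, 2, size=D), 0, LEVELS - 1)
print(classify(query))  # expected to recover class 2
```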