Jointly coordinating racetrack memory (RM) and stochastic computing (SC) to build an ultra-low-power neuron architecture is highly attractive. However, this combination has long been questioned for a fatal weakness: the heavy valid-bit collection from RM MTJs, i.e., accumulative parallel counters (APCs), cannot physically meet the requirements of energy-efficient in-memory DNNs. Fortunately, the recently developed transverse read (TR) provides a lightweight valid-bit collection by detecting the domain-wall resistance between a pair of MTJs on a single nanowire. In this work, we first propose a neuron architecture that utilizes parallel TRs to build an ultra-fast valid-bit collection for SC, in which a vector multiplication is reduced to swift TRs. To avoid the huge storage required for full stochastic sequences under the limited TR banks, a hybrid coding scheme, pseudo-fractal compression, is designed to generate stochastic sequences segment by segment. To overcome the misalignment caused by parallel early termination, an asynchronous TR schedule is further designed to regularize the vectorization, in which the valid bits from different lanes are merged in multiple RM stacks for vector-level valid-bit collection. However, an inherent defect of TR, namely that neighboring portions of a nanowire cannot be accessed simultaneously, could limit the throughput of parallel vector multiplication; therefore, an interleaved data placement is used to fully utilize the memory bus among different vectors. The results show that the TR-assisted SC-MAC achieves a $2.88\times$-$4.40\times$ speedup over CORUSCANT while reducing energy consumption by $1.26\times$-$1.42\times$.
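For context, the computation that the TR-based valid-bit collection accelerates can be illustrated in software: in unipolar stochastic computing, multiplying two values encoded as random bitstreams reduces to a bitwise AND, and the result is recovered by counting the valid (one) bits, which is exactly the role of the APC that TR replaces in hardware. The following is a minimal sketch of this principle only; the function names are illustrative and do not correspond to the proposed architecture.

```python
import random

def to_stochastic(p, length, rng):
    # Unipolar SC encoding: each bit is 1 with probability p.
    return [1 if rng.random() < p else 0 for _ in range(length)]

def sc_multiply(sa, sb):
    # Multiplying unipolar streams is a bitwise AND of the sequences.
    return [a & b for a, b in zip(sa, sb)]

def valid_bit_count(stream):
    # Popcount over the stream; in hardware this is the APC's job
    # (or, in this work, a lightweight transverse read).
    return sum(stream)

rng = random.Random(0)
L = 4096
sa = to_stochastic(0.5, L, rng)
sb = to_stochastic(0.25, L, rng)
prod = sc_multiply(sa, sb)
est = valid_bit_count(prod) / L  # expected to approximate 0.5 * 0.25 = 0.125
```

The estimate converges to the true product as the sequence length grows, which is why shortening or compressing the sequences (as the pseudo-fractal coding does) and speeding up the popcount (as TR does) are the two levers for SC efficiency.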