It is very attractive to jointly coordinate racetrack memory (RM) and stochastic computing (SC) to build an ultra-low-power neuron architecture. However, this combination has long been questioned over a fatal weakness: the narrow bit-view of the RM-MTJ structure, a.k.a. the shift-and-access pattern, cannot physically match the high throughput of directly stored stochastic sequences. Fortunately, the recently developed Transverse Read (TR) provides RM with a wider segment-view by detecting the resistance of the domain walls between a pair of MTJs on a single nanowire, so RM can be given faster access to the sequences without any substantial domain shifting. To exploit TR for power-efficient SC-DNNs, in this work we propose a segment-based compression that leverages one-cycle TR to read only the kernel segments of stochastic sequences while removing a large number of redundant segments for ultra-high storage density. In the decompression stage, the low-discrepancy stochastic sequences can be quickly reassembled by a select-and-output loop over the kernel segments rather than slowly regenerated by costly stochastic number generators (SNGs). Since TR offers ideal in-memory acceleration of one-counting, counter-free SC multiply-accumulate units (SC-MACs) are designed and deployed near the RM to form a power-efficient neuron architecture, in which the binary results of TR are activated directly without sluggish accumulative parallel counters (APCs). The results show that, under the TR-aided RM model, the power efficiency, speed, and stochastic accuracy of the proposed Seed-based Fast Stochastic Computing significantly enhance DNN performance: computation is 2.88x faster on LeNet-5 and 4.40x faster on VGG-19 compared to the CORUSCANT model.
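As a minimal sketch of the select-and-output decompression described above, the following Python fragment reassembles a stochastic bitstream from stored kernel segments. The segment width, the per-sequence selection order, and the helper names (`kernel_segments`, `select_order`) are illustrative assumptions, not the paper's actual interface.

```python
# Sketch (assumptions): a stochastic sequence is stored as a small set of
# unique "kernel" segments plus a per-sequence selection order saying which
# kernel segment to emit at each step. Reassembly is then a simple
# select-and-output loop, with no SNG in the datapath.

from typing import List

SEG_WIDTH = 8  # hypothetical TR segment width (bits read in one TR cycle)

def decompress(kernel_segments: List[List[int]],
               select_order: List[int]) -> List[int]:
    """Reassemble a stochastic bitstream by selecting kernel segments."""
    stream: List[int] = []
    for idx in select_order:          # one select-and-output step per segment
        stream.extend(kernel_segments[idx])
    return stream

# Example: two kernel segments shared by many compressed sequences.
kernels = [[1, 0, 1, 1, 0, 1, 0, 1],  # popcount 5 -> encodes ~5/8
           [0, 1, 0, 0, 1, 0, 1, 0]]  # popcount 3 -> encodes ~3/8
seq = decompress(kernels, select_order=[0, 1, 0])
print(sum(seq) / len(seq))            # empirical value of the 24-bit stream
```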
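The counter-free SC-MAC idea can likewise be sketched in software: if a TR access returns the one-count of a segment in a single cycle, accumulation reduces to summing per-segment popcounts of the AND-ed operand streams, with no APC tree. The function names and the segment model below are assumptions for illustration only.

```python
# Sketch (assumption): a transverse read of a segment yields its one-count
# in one access, so accumulation becomes plain binary addition of those
# counts rather than an accumulative parallel counter (APC) over raw bits.

def tr_one_count(segment: list) -> int:
    """Model of a one-cycle TR access returning the segment's popcount."""
    return sum(segment)

def sc_mac(x_stream: list, w_stream: list, seg: int = 8) -> int:
    """Unipolar SC multiply (bitwise AND) + counter-free accumulation."""
    assert len(x_stream) == len(w_stream)
    prod = [a & b for a, b in zip(x_stream, w_stream)]  # SC multiplication
    # Accumulate segment-wise one-counts as TR would deliver them.
    return sum(tr_one_count(prod[i:i + seg])
               for i in range(0, len(prod), seg))

x = [1, 0, 1, 1, 0, 1, 0, 1] * 2      # encodes ~5/8
w = [1, 1, 0, 1, 0, 1, 1, 0] * 2      # encodes ~5/8
print(sc_mac(x, w) / len(x))          # approximates (5/8)*(5/8) ~ 0.39
```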