Advances in processor technology have made it easy to handle data-intensive workloads, but slower progress in main memory has created performance bottlenecks. DRAM has seen improvements in access latency and reductions in cost-per-bit as cell density has increased, yet its data transfer rate still lags behind the processing speed of current-generation processors. Because hardware-level memory advancements progress more slowly than high-end processors, architectural techniques such as prediction and replacement policies have become a major focus. Data prediction is an actively researched topic, since accurate prediction can boost performance by reducing excess memory accesses: data is fetched ahead of time based on observed access trends and behaviors. Although prediction techniques have been applied at most levels of the computer architecture, we propose implementing data prediction in DRAM-level architectures such as TL-DRAM and CROW. Both of these designs partition the DRAM into a smaller section that is faster and a larger section that holds the bulk of the data but is comparatively slower. We aim to apply data prediction between these sections so that predicted data is transferred to the faster section, reducing memory access time and improving overall performance.
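A minimal sketch of this idea, assuming a simple frequency-based predictor and illustrative parameters (the class name, the segment size FAST_ROWS, and MIGRATE_THRESHOLD are hypothetical and not part of TL-DRAM or CROW; in TL-DRAM the fast section would correspond to the near segment, and in CROW to the copy rows):

```python
from collections import Counter

# Hypothetical sizes: a small fast region and a large slow region
# holding the remaining DRAM rows.
FAST_ROWS = 8          # rows the fast section can hold (illustrative)
MIGRATE_THRESHOLD = 3  # accesses before a row is treated as "hot" (illustrative)

class PredictiveMigrator:
    """Toy model: count row accesses and keep the hottest rows in the fast section."""
    def __init__(self):
        self.access_counts = Counter()
        self.fast_section = set()   # rows currently held in the fast section

    def access(self, row):
        self.access_counts[row] += 1
        hit = row in self.fast_section
        # Predict future reuse from past frequency; migrate hot rows.
        if not hit and self.access_counts[row] >= MIGRATE_THRESHOLD:
            self._migrate(row)
        return "fast" if hit else "slow"

    def _migrate(self, row):
        if len(self.fast_section) >= FAST_ROWS:
            # Evict the least-frequently-accessed row (simplest possible policy).
            coldest = min(self.fast_section, key=lambda r: self.access_counts[r])
            self.fast_section.discard(coldest)
        self.fast_section.add(row)

# Example trace: repeated accesses to rows 5 and 9 eventually hit the fast section.
m = PredictiveMigrator()
trace = [5, 9, 5, 9, 5, 9, 5, 9]
print([m.access(r) for r in trace])
```

This is only a behavioral sketch of the predict-and-migrate policy; an actual evaluation would model the latency difference between the two sections and more realistic predictors.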