The field of artificial intelligence (AI) has witnessed tremendous growth in recent years; however, some of the most pressing challenges for the continued development of AI systems are the fundamental bandwidth, energy-efficiency, and speed limitations faced by electronic computer architectures. There has been growing interest in using photonic processors to perform neural network inference operations, but these networks are currently trained using standard digital electronics. Here, we propose on-chip training of neural networks enabled by a CMOS-compatible silicon photonic architecture to harness the potential for massively parallel, efficient, and fast data operations. Our scheme employs the direct feedback alignment training algorithm, which trains neural networks using error feedback rather than error backpropagation, and can operate at speeds of trillions of multiply-accumulate (MAC) operations per second while consuming less than one picojoule per MAC operation. The photonic architecture exploits parallelized matrix-vector multiplications using arrays of microring resonators to process multi-channel analog signals along single waveguide buses, calculating the gradient vector of each neural network layer in situ; this is the most computationally expensive operation performed during the backward pass. We also experimentally demonstrate training a deep neural network with the MNIST dataset using on-chip MAC operation results. Our approach to efficient, ultra-fast neural network training showcases photonics as a promising platform for executing AI applications.
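To make the training rule concrete, the following is a minimal NumPy sketch of one direct feedback alignment update for an MNIST-sized multilayer perceptron. It is illustrative only: the layer sizes, tanh activations, learning rate, and all names (`dfa_step`, `Ws`, `Bs`, etc.) are assumptions of the sketch, not the paper's implementation, and the matrix-vector products computed here in software are the operations the photonic architecture performs in the analog domain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 784-100-10 MLP for MNIST; shapes and init are illustrative.
sizes = [784, 100, 10]
Ws = [rng.normal(0.0, 0.1, size=(n_out, n_in))
      for n_in, n_out in zip(sizes, sizes[1:])]
bs = [np.zeros(n_out) for n_out in sizes[1:]]
# Fixed random feedback matrices, one per hidden layer, mapping the
# output error (dim 10) back to that layer's width. These replace the
# transposed weight matrices that backpropagation would use.
Bs = [rng.normal(0.0, 0.1, size=(h, sizes[-1])) for h in sizes[1:-1]]

def dfa_step(x, y, Ws, bs, Bs, lr=0.01):
    """One direct-feedback-alignment update for a tanh MLP with a
    softmax/cross-entropy output, mutating Ws and bs in place."""
    # Forward pass, caching layer inputs (hs) and pre-activations (zs).
    a, hs, zs = x, [x], []
    for W, b in zip(Ws[:-1], bs[:-1]):
        z = W @ a + b
        zs.append(z)
        a = np.tanh(z)
        hs.append(a)
    logits = Ws[-1] @ a + bs[-1]
    p = np.exp(logits - logits.max())
    p /= p.sum()          # softmax probabilities
    e = p - y             # output error (cross-entropy gradient)

    # Output layer: ordinary gradient step.
    Ws[-1] -= lr * np.outer(e, hs[-1])
    bs[-1] -= lr * e
    # Hidden layers: project the SAME output error through the fixed
    # random matrix B_l instead of backpropagating it layer by layer.
    for l in range(len(Ws) - 2, -1, -1):
        delta = (Bs[l] @ e) * (1.0 - np.tanh(zs[l]) ** 2)
        Ws[l] -= lr * np.outer(delta, hs[l])
        bs[l] -= lr * delta
```

Because each hidden layer's update depends only on the shared output error and a fixed random matrix, the per-layer gradient computations are independent of one another, which is what makes the algorithm amenable to the parallelized photonic MAC operations described above.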