The Transformer architecture is widely used for machine translation tasks. However, it is resource-intensive, which makes it challenging to deploy on constrained embedded devices, particularly where the available hardware resources vary at run-time. We propose a dynamic machine translation model that scales the Transformer architecture according to the resources available at any given time. The proposed approach, 'Dynamic-HAT', uses a HAT SuperTransformer as the backbone to search, at design time, for SubTransformers with different accuracy-latency trade-offs. At run-time, the optimal SubTransformer for the current latency constraint is sampled from the SuperTransformer. Dynamic-HAT is evaluated on the Jetson Nano using inherited SubTransformers, i.e. SubTransformers sampled directly from the SuperTransformer, with a switching time of under 1 s. Because an inherited SubTransformer is not retrained from scratch after sampling, it incurs a BLEU score loss of less than 1.5%. To recover this loss, the dimensions of the design space can be reduced to tailor it to a family of target hardware. The reduced design space yields a BLEU score increase of approximately 1% over sub-optimal models from the original design space, with performance scaling over a wide range: 0.356 s to 1.526 s on the GPU and 2.9 s to 7.31 s on the CPU.
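To make the run-time switching step concrete, the following is a minimal Python sketch of how a latency-constrained SubTransformer could be selected from a design-time Pareto table. All names here (SubConfig, PARETO_TABLE, select_subtransformer) and the numbers in the table are illustrative assumptions, not the actual Dynamic-HAT or HAT implementation; the real system re-slices the shared SuperTransformer weights when a configuration is selected.

```python
# Hypothetical sketch of run-time SubTransformer selection.
# The configs, names, and numbers below are illustrative only.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class SubConfig:
    """One Pareto-optimal SubTransformer found at design time."""
    encoder_layers: int
    decoder_layers: int
    embed_dim: int
    est_latency_s: float  # profiled on the target device (e.g. Jetson Nano)
    bleu: float           # validation BLEU with inherited weights


# Design-time search output: configs sorted by estimated latency.
PARETO_TABLE: List[SubConfig] = [
    SubConfig(2, 1, 256, est_latency_s=0.356, bleu=25.1),  # illustrative
    SubConfig(4, 3, 512, est_latency_s=0.900, bleu=27.4),  # illustrative
    SubConfig(6, 6, 640, est_latency_s=1.526, bleu=28.3),  # illustrative
]


def select_subtransformer(latency_budget_s: float) -> Optional[SubConfig]:
    """Pick the most accurate SubTransformer that meets the latency budget.

    Because the SubTransformer inherits its weights from the
    SuperTransformer, switching only re-slices shared parameters,
    so no retraining is needed and the switch completes quickly.
    """
    feasible = [c for c in PARETO_TABLE if c.est_latency_s <= latency_budget_s]
    return max(feasible, key=lambda c: c.bleu) if feasible else None


if __name__ == "__main__":
    for budget in (0.4, 1.0, 2.0):
        print(f"budget={budget:.1f}s ->", select_subtransformer(budget))
```

In this sketch, tightening the latency budget at run-time simply selects a smaller configuration from the table; the table itself would be rebuilt at design time for each target device or device family.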