We discussed the beam search used during inference and how to model that procedure at training time with a Graph Transformer Network (GTN). Graph Transformer Networks are essentially weighted finite-state automata with automatic differentiation, which let us encode priors into the graph. There are different types of weighted finite-state automata and different operations on them, including union, Kleene closure, intersection, composition, and the forward score. A loss function is typically a difference between two such scores. These networks are easy to implement with the GTN library.
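As a minimal sketch using the open-source GTN Python bindings: the toy graph topologies, labels, and weights below are invented for illustration, and the loss shown is just a generic difference of forward scores, not any specific training criterion.

```python
import gtn  # pip install gtn

# A simple weighted finite-state acceptor over labels {0, 1}.
g1 = gtn.Graph()
g1.add_node(True)         # node 0: start
g1.add_node()             # node 1: internal
g1.add_node(False, True)  # node 2: accept
g1.add_arc(0, 1, 0)          # (src, dst, label); weight defaults to 0
g1.add_arc(0, 1, 1)
g1.add_arc(1, 2, 0)
g1.add_arc(1, 2, 1, 1, 0.5)  # (src, dst, ilabel, olabel, weight)

# A second acceptor encoding a prior: zero or more 0s followed by a 1.
g2 = gtn.Graph()
g2.add_node(True)
g2.add_node(False, True)
g2.add_arc(0, 0, 0)  # self-loop
g2.add_arc(0, 1, 1)

# The operations mentioned above:
u = gtn.union([g1, g2])        # accepts what either graph accepts
c = gtn.closure(g2)            # Kleene closure: zero or more repetitions
inter = gtn.intersect(g1, g2)  # accepts what both graphs accept

# forward_score log-sum-exps the weights of all accepting paths and is
# differentiable with respect to the arc weights. A loss is then
# typically a difference of two forward scores, e.g. a constrained
# graph's score minus a normalization graph's score.
loss = gtn.subtract(gtn.forward_score(inter), gtn.forward_score(g1))
gtn.backward(loss)  # gradients flow back through the graph operations
print(loss.item())
```

Because the operations compose freely, priors such as lexicons or n-gram constraints can be swapped in simply by replacing one of the input graphs; the loss and gradients adapt automatically.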
With recent advances in speech synthesis, synthetic data is becoming a viable alternative to real data for training speech recognition models. However, machine learning with synthetic data is not trivial due to the gap between the synthetic and the real data distributions. Synthetic datasets may contain artifacts that do not exist in real data, such as structured noise, content errors, or unrealistic speaking styles. Moreover, the synthesis process may introduce a bias due to uneven sampling of the data manifold. We propose two novel techniques during training to mitigate the problems due to the distribution gap: (i) a rejection sampling algorithm and (ii) using separate batch normalization statistics for the real and the synthetic samples. We show that these methods significantly improve the training of speech recognition models using synthetic data. We evaluate the proposed approach on keyword detection and Automatic Speech Recognition (ASR) tasks, and observe up to 18% and 13% relative error reduction, respectively, compared to naively using the synthetic data.
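The two techniques could be sketched in PyTorch roughly as follows. This is an illustrative sketch, not the paper's implementation: the names `accept_synthetic` and `DualBatchNorm1d`, the discriminator-based acceptance rule, and the choice of per-branch affine parameters are all assumptions made for the example.

```python
import torch
import torch.nn as nn

def accept_synthetic(realism: torch.Tensor, ratio_cap: float = 10.0) -> torch.Tensor:
    """Rejection-sampling mask for a batch of synthetic samples.

    `realism` holds discriminator outputs D(x) ~= P(real | x). The density
    ratio p_real/p_synth is D / (1 - D); accepting each sample with
    probability min(1, ratio / ratio_cap) is a standard discriminator-based
    rejection rule. The paper's exact criterion may differ (assumption).
    """
    ratio = realism / (1.0 - realism).clamp_min(1e-6)
    accept_prob = (ratio / ratio_cap).clamp(max=1.0)
    return torch.rand_like(accept_prob) < accept_prob

class DualBatchNorm1d(nn.Module):
    """Batch norm with separate running statistics for real vs. synthetic
    inputs, so synthetic artifacts do not contaminate the real-data
    statistics. Each branch keeps its own affine parameters here; whether
    to share them is a design choice, not specified by the abstract."""

    def __init__(self, num_features: int):
        super().__init__()
        self.bn_real = nn.BatchNorm1d(num_features)
        self.bn_synth = nn.BatchNorm1d(num_features)

    def forward(self, x: torch.Tensor, is_synthetic: bool) -> torch.Tensor:
        # Route each batch through the normalizer matching its domain.
        return self.bn_synth(x) if is_synthetic else self.bn_real(x)

# Usage: filter synthetic batches, then normalize per domain.
bn = DualBatchNorm1d(80)                    # e.g., 80 mel filterbank features
synth = torch.randn(16, 80, 100)            # (batch, features, time)
keep = accept_synthetic(torch.rand(16))     # stand-in discriminator scores
out_s = bn(synth[keep], is_synthetic=True)
out_r = bn(torch.randn(16, 80, 100), is_synthetic=False)
```

The batch norm is the only domain-dependent piece here; at inference time one would use the real-data statistics, since only real speech is seen then.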