State-of-the-art speech recognition systems rely on fixed, hand-crafted features such as mel-filterbanks to preprocess the waveform before the training pipeline. In this paper, we study end-to-end systems trained directly from the raw waveform, building on two alternatives for trainable replacements of mel-filterbanks that use a convolutional architecture. The first one is inspired by gammatone filterbanks (Hoshen et al., 2015; Sainath et al., 2015), and the second one by the scattering transform (Zeghidour et al., 2017). We propose two modifications to these architectures and systematically compare them to mel-filterbanks on the Wall Street Journal dataset. The first modification is the addition of an instance normalization layer, which greatly improves the gammatone-based trainable filterbanks and speeds up the training of the scattering-based filterbanks. The second one relates to the low-pass filter used in these approaches. These modifications consistently improve performance for both approaches and remove the need for a careful initialization in scattering-based trainable filterbanks. In particular, we show a consistent improvement in word error rate of the trainable filterbanks relative to comparable mel-filterbanks. To our knowledge, this is the first time end-to-end models trained from the raw signal significantly outperform mel-filterbanks on a large-vocabulary task under clean recording conditions.
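The front-ends described above share a common structure: a bank of band-pass filters convolved with the waveform, a non-linearity, a low-pass filter for smoothing and decimation, log compression, and (in our modification) instance normalization. The following NumPy sketch illustrates this pipeline with fixed random filters standing in for learned ones; the function name, filter sizes, and squared-Hann low-pass window are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def filterbank_frontend(wave, filters, lp_width=400, stride=160, eps=1e-6):
    """Illustrative time-domain filterbank front-end (a sketch, not the
    paper's exact architecture): band-pass convolution, squared
    rectification, low-pass smoothing, log compression, and per-channel
    instance normalization over time."""
    lowpass = np.hanning(lp_width) ** 2              # squared Hann low-pass window (assumption)
    feats = []
    for h in filters:                                # one 1-D filter per output channel
        band = np.convolve(wave, h, mode="same")     # band-pass convolution
        energy = band ** 2                           # squared rectification
        smooth = np.convolve(energy, lowpass, mode="same")[::stride]  # smooth + decimate
        feats.append(np.log(smooth + eps))           # log compression
    feats = np.stack(feats)                          # shape: (channels, frames)
    # instance normalization: zero mean, unit variance per channel over time
    mean = feats.mean(axis=1, keepdims=True)
    std = feats.std(axis=1, keepdims=True)
    return (feats - mean) / (std + eps)

# toy usage: random filters in place of learned gammatone-initialized ones
rng = np.random.default_rng(0)
wave = rng.standard_normal(16000)                    # 1 s of noise at 16 kHz
filters = rng.standard_normal((40, 400)) * 0.01     # 40 filters of 25 ms each
feats = filterbank_frontend(wave, filters)
print(feats.shape)                                   # (40, 100)
```

In a trainable version, `filters` (and optionally the low-pass window) would be learned jointly with the acoustic model; the instance normalization step is what makes the log-energy features insensitive to per-utterance gain, which the paper identifies as a key ingredient.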