[Devlin et al., 2019] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL.
[Lou and Johnson, 2017] Paria Jamshid Lou and Mark Johnson. 2017. Disfluency detection using a noisy channel model and a deep neural language model. In Proceedings of ACL.
[Wang et al., 2017] Shaolei Wang, Wanxiang Che, Yue Zhang, Meishan Zhang, and Ting Liu. 2017. Transition-based disfluency detection using LSTMs. In Proceedings of EMNLP, pages 2785–2794.
[Wu et al., 2015] Shuangzhi Wu, Dongdong Zhang, Ming Zhou, and Tiejun Zhao. 2015. Efficient disfluency detection with transition-based parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 495–503. Association for Computational Linguistics.
[Zayats et al., 2016] Vicky Zayats, Mari Ostendorf, and Hannaneh Hajishirzi. 2016. Disfluency detection using a bidirectional LSTM. arXiv preprint arXiv:1604.03209.
[Wang et al., 2019] Shaolei Wang, Wanxiang Che, Qi Liu, Pengda Qin, Ting Liu, and William Yang Wang. 2019. Multi-task self-supervised learning for disfluency detection. arXiv preprint arXiv:1908.05378.
[Bach and Huang, 2019] Nguyen Bach and Fei Huang. 2019. Noisy BiLSTM-based models for disfluency detection. In Proceedings of Interspeech 2019, pages 4230–4234.
[Lou and Johnson, 2020] Paria Jamshid Lou and Mark Johnson. 2020. Improving disfluency detection by self-training a self-attentive model. arXiv preprint.
Executive editor for this issue: Ding Xiao. Editor for this issue: Peng Pai.