NLP arXiv Daily Digest [2021.10.19]


Visit arxivdaily.com for the digest with abstracts, plus bookmarking, search, and more, covering CS | Physics | Math | Economics | Statistics | Finance | Biology | Electrical Engineering.
Also available via the WeChat public account arXiv每日学术速递; you are welcome to follow.

cs.CL: 92 papers today


Transformer (7 papers)

【1】 NormFormer: Improved Transformer Pretraining with Extra Normalization
Link: arxiv.org/abs/2110.0945
Authors: Sam Shleifer, Jason Weston, Myle Ott
Affiliations: Facebook AI Research

【2】 SentimentArcs: A Novel Method for Self-Supervised Sentiment Analysis of Time Series Shows SOTA Transformers Can Struggle Finding Narrative Arcs
Link: arxiv.org/abs/2110.0945
Authors: Jon Chun
Affiliations: Digital Humanities Colab, Integrated Program for Humane Studies, Kenyon College, Gambier, OH
Notes: 87 pages, 97 figures

【3】 Contextual Hate Speech Detection in Code Mixed Text using Transformer Based Approaches
Link: arxiv.org/abs/2110.0933
Authors: Ravindra Nayak, Raviraj Joshi
Affiliations: Sri Jayachamarajendra College of Engineering, Mysore; Indian Institute of Technology Madras, Chennai
Notes: Accepted at HASOC @ Forum for Information Retrieval Evaluation (FIRE) 2021

【4】 Deep Transfer Learning & Beyond: Transformer Language Models in Information Systems Research
Link: arxiv.org/abs/2110.0897
Authors: Ross Gruetzemacher, David Paradice
Affiliations: Wichita State University, Frank Barton School of Business
Notes: Under review (revised once). Section 2, the literature review on deep transfer learning and transformer language models, is a valuable introduction for a broad audience (not just information systems researchers)

【5】 Transformer with a Mixture of Gaussian Keys
Link: arxiv.org/abs/2110.0867
Authors: Tam Nguyen, Tan M. Nguyen, Dung Le, Khuong Nguyen, Anh Tran, Richard G. Baraniuk, Nhat Ho, Stanley J. Osher
Affiliations: FPT Software, Vietnam; University of California, Los Angeles, USA; Rice University, Houston, USA; University of Texas, Austin, USA
Notes: 21 pages, 8 figures, 4 tables

【6】 On Learning the Transformer Kernel
Link: arxiv.org/abs/2110.0832
Authors: Sankalan Pal Chowdhury, Adamos Solomou, Avinava Dubey, Mrinmaya Sachan
Affiliations: Department of Computer Science, ETH Zürich; Google Research, Mountain View, CA
Notes: 26 pages, of which 11 form the appendix; 6 figures, of which 2 are in the appendix

【7】 From Multimodal to Unimodal Attention in Transformers using Knowledge Distillation
Link: arxiv.org/abs/2110.0827
Authors: Dhruv Agarwal, Tanay Agrawal, Laura M. Ferrari, François Bremond
Affiliations: Inria Sophia Antipolis - Méditerranée, France; Indian Institute of Information Technology, Allahabad, India; Université Côte d'Azur, France
Notes: Preprint. Final paper accepted at the 17th IEEE International Conference on Advanced Video and Signal-based Surveillance (AVSS 2021), Virtual, November 16-19, 2021. 8 pages

QA|VQA|Question Answering|Dialogue (2 papers)

【1】 COVIDRead: A Large-scale Question Answering Dataset on COVID-19
Link: arxiv.org/abs/2110.0932
Authors: Tanik Saikh, Sovan Kumar Sahoo, Asif Ekbal, Pushpak Bhattacharyya
Affiliations: Department of Computer Science and Engineering, Indian Institute of Technology Patna, Bihta, Patna, India; Indian Institute of Technology Bombay, Mumbai, Maharashtra, India
Notes: 20 pages, 7 figures

【2】 Open Domain Question Answering over Virtual Documents: A Unified Approach for Data and Text
Link: arxiv.org/abs/2110.0841
Authors: Kaixin Ma, Hao Cheng, Xiaodong Liu, Eric Nyberg, Jianfeng Gao
Affiliations: Carnegie Mellon University; Microsoft Research

Machine Translation (1 paper)

【1】 Towards Making the Most of Multilingual Pretraining for Zero-Shot Neural Machine Translation
Link: arxiv.org/abs/2110.0854
Authors: Guanhua Chen, Shuming Ma, Yun Chen, Dongdong Zhang, Jia Pan, Wenping Wang, Furu Wei
Affiliations: The University of Hong Kong; Microsoft Research; Shanghai University of Finance and Economics; Texas A&M University
Notes: Preprint

Semantic Parsing (7 papers)

【1】 Ensembling Graph Predictions for AMR Parsing
Link: arxiv.org/abs/2110.0913
Authors: Hoang Thanh Lam, Gabriele Picco, Yufang Hou, Young-Suk Lee, Lam M. Nguyen, Dzung T. Phan, Vanessa López, Ramon Fernandez Astudillo
Affiliations: IBM Research, Dublin, Ireland; IBM Research, Thomas J. Watson Research Center, Yorktown Heights, USA
Notes: Accepted at NeurIPS 2021

【2】 Substructure Distribution Projection for Zero-Shot Cross-Lingual Dependency Parsing
Link: arxiv.org/abs/2110.0853
Authors: Haoyue Shi, Kevin Gimpel, Karen Livescu
Affiliations: Toyota Technological Institute at Chicago, Chicago, IL, USA

【3】 The Power of Prompt Tuning for Low-Resource Semantic Parsing
Link: arxiv.org/abs/2110.0852
Authors: Nathan Schucher, Siva Reddy, Harm de Vries
Affiliations: Element AI, a ServiceNow company; Mila / McGill University; Facebook CIFAR AI Chair

【4】 Controllable Semantic Parsing via Retrieval Augmentation
Link: arxiv.org/abs/2110.0845
Authors: Panupong Pasupat, Yuan Zhang, Kelvin Guu
Affiliations: Google Research
Notes: EMNLP 2021

【5】 Multilingual unsupervised sequence segmentation transfers to extremely low-resource languages
Link: arxiv.org/abs/2110.0841
Authors: C. M. Downey, Shannon Drizin, Levon Haroutunian, Shivin Thukral
Affiliations: Department of Linguistics, University of Washington

【6】 On The Ingredients of an Effective Zero-shot Semantic Parser
Link: arxiv.org/abs/2110.0838
Authors: Pengcheng Yin, John Wieting, Avirup Sil, Graham Neubig
Affiliations: Carnegie Mellon University; Google Research; IBM Research

【7】 Towards Transparent Interactive Semantic Parsing via Step-by-Step Correction
Link: arxiv.org/abs/2110.0834
Authors: Lingbo Mo, Ashley Lewis, Huan Sun, Michael White
Affiliations: The Ohio State University

Graph|Knowledge Graph|Knowledge (7 papers)

【1】 HRKD: Hierarchical Relational Knowledge Distillation for Cross-domain Language Model Compression
Link: arxiv.org/abs/2110.0855
Authors: Chenhe Dong, Yaliang Li, Ying Shen, Minghui Qiu
Affiliations: Sun Yat-sen University; Alibaba Group
Notes: EMNLP 2021

【2】 Think Before You Speak: Using Self-talk to Generate Implicit Commonsense Knowledge for Response Generation
Link: arxiv.org/abs/2110.0850
Authors: Pei Zhou, Karthik Gopalakrishnan, Behnam Hedayatnia, Seokhwan Kim, Jay Pujara, Xiang Ren, Yang Liu, Dilek Hakkani-Tur
Affiliations: Department of Computer Science, University of Southern California; Amazon Alexa AI
Notes: 13 pages, 2 figures, 7 tables

【3】 Understanding Procedural Knowledge by Sequencing Multimodal Instructional Manuals
Link: arxiv.org/abs/2110.0848
Authors: Te-Lin Wu, Alex Spangher, Pegah Alipoormolabashi, Marjorie Freedman, Ralph Weischedel, Nanyun Peng
Affiliations: University of California, Los Angeles; Information Sciences Institute, University of Southern California; Sharif University of Technology

【4】 Leveraging Knowledge in Multilingual Commonsense Reasoning
Link: arxiv.org/abs/2110.0846
Authors: Yuwei Fang, Shuohang Wang, Yichong Xu, Ruochen Xu, Siqi Sun, Chenguang Zhu, Michael Zeng
Affiliations: Microsoft Cognitive Services Research Group
Notes: First place on the XCSR leaderboard: this https URL. Work in progress

【5】 Knowledge Enhanced Pretrained Language Models: A Compreshensive Survey
Link: arxiv.org/abs/2110.0845
Authors: Xiaokai Wei, Shen Wang, Dejiao Zhang, Parminder Bhatia, Andrew Arnold

【6】 Prix-LM: Pretraining for Multilingual Knowledge Base Construction
Link: arxiv.org/abs/2110.0844
Authors: Wenxuan Zhou, Fangyu Liu, Ivan Vulić, Nigel Collier, Muhao Chen
Affiliations: LUKA Lab, CSD & ISI, University of Southern California, USA; Language Technology Lab, TAL, University of Cambridge, UK

【7】 Generated Knowledge Prompting for Commonsense Reasoning
Link: arxiv.org/abs/2110.0838
Authors: Jiacheng Liu, Alisa Liu, Ximing Lu, Sean Welleck, Peter West, Ronan Le Bras, Yejin Choi, Hannaneh Hajishirzi
Affiliations: Paul G. Allen School of Computer Science & Engineering, University of Washington; Allen Institute for Artificial Intelligence

Summarization|Information Extraction (4 papers)

【1】 Fine-Grained Opinion Summarization with Minimal Supervision
Link: arxiv.org/abs/2110.0884
Authors: Suyu Ge, Jiaxin Huang, Yu Meng, Sharon Wang, Jiawei Han
Affiliations: University of Illinois Urbana-Champaign

【2】 PRIMER: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization
Link: arxiv.org/abs/2110.0849
Authors: Wen Xiao, Iz Beltagy, Giuseppe Carenini, Arman Cohan
Affiliations: University of British Columbia, Vancouver, Canada; Allen Institute for AI, Seattle, WA, USA; Paul G. Allen School of Computer Science & Engineering, University of Washington

【3】 Training Dynamics for Text Summarization Models
Link: arxiv.org/abs/2110.0837
Authors: Tanya Goyal, Jiacheng Xu, Junyi Jessy Li, Greg Durrett
Affiliations: Department of Computer Science, Department of Linguistics, The University of Texas at Austin
Notes: Preprint

【4】 Aspect-Oriented Summarization through Query-Focused Extraction
Link: arxiv.org/abs/2110.0829
Authors: Ojas Ahuja, Jiacheng Xu, Akshay Gupta, Kevin Horecka, Greg Durrett
Affiliations: The University of Texas at Austin; Walmart NexTech

Reasoning|Analysis|Understanding|Explanation (4 papers)

【1】 Analysis of French Phonetic Idiosyncrasies for Accent Recognition
Link: arxiv.org/abs/2110.0917
Authors: Pierre Berjon, Avishek Nag, Soumyabrata Dev
Affiliations: Département Sciences du Numérique, INP-ENSEEIHT, Toulouse, France; School of Electrical and Electronic Engineering, University College Dublin, Ireland; ADAPT SFI Research Centre, Dublin, Ireland; School of Computer Science, University College Dublin, Ireland
Notes: Accepted in Soft Computing Letters, 2021

【2】 MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding
Link: arxiv.org/abs/2110.0851
Authors: Junlong Li, Yiheng Xu, Lei Cui, Furu Wei
Affiliations: Shanghai Jiao Tong University; Microsoft Research Asia
Notes: Work in Progress

【3】 Case-based Reasoning for Better Generalization in Text-Adventure Games
Link: arxiv.org/abs/2110.0847
Authors: Mattia Atzeni, Shehzaad Dhuliawala, Keerthiram Murugesan, Mrinmaya Sachan
Affiliations: IBM Research; EPFL; ETH Zürich

【4】 Unsupervised Natural Language Inference Using PHL Triplet Generation
Link: arxiv.org/abs/2110.0843
Authors: Neeraj Varshney, Pratyay Banerjee, Tejas Gokhale, Chitta Baral
Affiliations: Arizona State University
Notes: 9 pages, 2 figures, 8 tables

GAN|Adversarial|Attacks|Generation (9 papers)

【1】 Protecting Anonymous Speech: A Generative Adversarial Network Methodology for Removing Stylistic Indicators in Text
Link: arxiv.org/abs/2110.0949
Authors: Rishi Balakrishnan, Stephen Sloan, Anil Aswani
Affiliations: University of California, Berkeley

【2】 Don't Judge Me by My Face : An Indirect Adversarial Approach to Remove Sensitive Information From Multimodal Neural Representation in Asynchronous Job Video Interviews
Link: arxiv.org/abs/2110.0942
Authors: Léo Hemamou, Arthur Guillon, Jean-Claude Martin, Chloé Clavel
Affiliations: EASYRECRUE, Paris, France; LIMSI-LISN, CNRS, Paris-Sud University, Paris-Saclay University, Orsay, France; Télécom-Paris, IP-Paris, Paris, France
Notes: Published in ACII 2021

【3】 BEAMetrics: A Benchmark for Language Generation Evaluation Evaluation
Link: arxiv.org/abs/2110.0914
Authors: Thomas Scialom, Felix Hill
Affiliations: Sorbonne Université, CNRS, LIP6, Paris, France; reciTAL, Paris, France; DeepMind

【4】 FrugalScore: Learning Cheaper, Lighter and Faster Evaluation Metrics for Automatic Text Generation
Link: arxiv.org/abs/2110.0855
Authors: Moussa Kamal Eddine, Guokan Shang, Antoine J.-P. Tixier, Michalis Vazirgiannis
Affiliations: École Polytechnique; Linagora

【5】 Multimodal Dialogue Response Generation
Link: arxiv.org/abs/2110.0851
Authors: Qingfeng Sun, Yujing Wang, Can Xu, Kai Zheng, Yaming Yang, Huang Hu, Fei Xu, Jessica Zhang, Xiubo Geng, Daxin Jiang
Affiliations: Microsoft STC Asia; Microsoft Research Asia
Notes: Submitted before 15 October, 11:59 pm AoE (UTC-12)

【6】 Analyzing Dynamic Adversarial Training Data in the Limit
Link: arxiv.org/abs/2110.0851
Authors: Eric Wallace, Adina Williams, Robin Jia, Douwe Kiela
Affiliations: UC Berkeley; Facebook AI Research; USC

【7】 Improving Compositional Generalization with Self-Training for Data-to-Text Generation
Link: arxiv.org/abs/2110.0846
Authors: Sanket Vaibhav Mehta, Jinfeng Rao, Yi Tay, Mihir Kale, Ankur Parikh, Hongtao Zhong, Emma Strubell
Affiliations: Carnegie Mellon University; Google; Google Research
Notes: 10 pages

【8】 How Well Do You Know Your Audience? Reader-aware Question Generation
Link: arxiv.org/abs/2110.0844
Authors: Ian Stewart, Rada Mihalcea
Affiliations: Computer Science and Engineering, University of Michigan

【9】 Control Prefixes for Text Generation
Link: arxiv.org/abs/2110.0832
Authors: Jordan Clive, Kris Cao, Marek Rei
Affiliations: Imperial College London; DeepMind, London, UK

Semi-/Weakly-/Unsupervised|Uncertainty (1 paper)

【1】 Prioritization of COVID-19-related literature via unsupervised keyphrase extraction and document representation learning
Link: arxiv.org/abs/2110.0887
Authors: Blaž Škrlj, Marko Jukič, Nika Eržen, Senja Pollak, Nada Lavrač
Affiliations: Jožef Stefan Institute, Ljubljana, Slovenia; Jožef Stefan International Postgraduate School, Ljubljana, Slovenia; University of Maribor, Slovenia

Detection (1 paper)

【1】 Ceasing hate with MoH: Hate Speech Detection in Hindi-English Code-Switched Language
Link: arxiv.org/abs/2110.0939
Authors: Arushi Sharma, Anubha Kabra, Minni Jain
Affiliations: Optum Global Advantage; Adobe Inc.; Delhi Technological University
Notes: Accepted in Elsevier Journal of Information Processing and Management. Sharma and Kabra made equal contributions

Recognition/Classification (3 papers)

【1】 Intent Classification Using Pre-Trained Embeddings For Low Resource Languages
Link: arxiv.org/abs/2110.0926
Authors: Hemant Yadav, Akshat Gupta, Sai Krishna Rallabandi, Alan W Black, Rajiv Ratn Shah
Affiliations: MIDAS, IIIT Delhi, India; J.P. Morgan AI Research, New York, USA; Carnegie Mellon University

【2】 Sparse Distillation: Speeding Up Text Classification by Using Bigger Models
Link: arxiv.org/abs/2110.0853
Authors: Qinyuan Ye, Madian Khabsa, Mike Lewis, Sinong Wang, Xiang Ren, Aaron Jaech
Affiliations: University of Southern California; Facebook AI

【3】 Inconsistent Few-Shot Relation Classification via Cross-Attentional Prototype Networks with Contrastive Learning
Link: arxiv.org/abs/2110.0825
Authors: Hongru Wang, Zhijing Jin, Jiarun Cao, Gabriel Pui Cheong Fung, Kam-Fai Wong
Affiliations: Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong; Max Planck Institute for Intelligent Systems & ETH Zürich

Zero/Few/One-Shot|Transfer|Adaptation (1 paper)

【1】 A Unified Speaker Adaptation Approach for ASR
Link: arxiv.org/abs/2110.0854
Authors: Yingzhu Zhao, Chongjia Ni, Cheung-Chi Leung, Shafiq Joty, Eng Siong Chng, Bin Ma
Affiliations: Nanyang Technological University, Singapore; Machine Intelligence Technology, Alibaba Group
Notes: Accepted by EMNLP 2021

Corpora (1 paper)

【1】 The Arabic Parallel Gender Corpus 2.0: Extensions and Analyses
Link: arxiv.org/abs/2110.0921
Authors: Bashar Alhafni, Nizar Habash, Houda Bouamor
Affiliations: Computational Approaches to Modeling Language Lab, New York University Abu Dhabi; Carnegie Mellon University in Qatar

Representations (3 papers)

【1】 Deep Clustering For General-Purpose Audio Representations
Link: arxiv.org/abs/2110.0889
Authors: Sreyan Ghosh, Sandesh V Katta, Ashish Seth, S. Umesh
Affiliations: Speech Lab, Dept. of Electrical Engineering, IIT Madras, Chennai, India
Notes: Submitted to ICASSP 2022

【2】 Virtual Augmentation Supported Contrastive Learning of Sentence Representations
Link: arxiv.org/abs/2110.0855
Authors: Dejiao Zhang, Wei Xiao, Henghui Zhu, Xiaofei Ma, Andrew O. Arnold
Affiliations: AWS AI Labs, New York
Notes: 8 pages, 3 figures, 3 tables

【3】 Probing as Quantifying the Inductive Bias of Pre-trained Representations
Link: arxiv.org/abs/2110.0838
Authors: Alexander Immer, Lucas Torroba Hennigen, Vincent Fortuin, Ryan Cotterell
Affiliations: ETH Zürich; University of Cambridge

Word2Vec|Text|Words (3 papers)

【1】 ViraPart: A Text Refinement Framework for ASR and NLP Tasks in Persian
Link: arxiv.org/abs/2110.0908
Authors: Narges Farokhshad, Milad Molazadeh, Saman Jamalabbasi, Hamed Babaei Giglou, Saeed Bibak

【2】 Quantifying the Task-Specific Information in Text-Based Classifications
Link: arxiv.org/abs/2110.0893
Authors: Zining Zhu, Aparna Balagopalan, Marzyeh Ghassemi, Frank Rudzicz
Affiliations: University of Toronto; Vector Institute for Artificial Intelligence; MIT; Unity Health Toronto

【3】 Seeking Patterns, Not just Memorizing Procedures: Contrastive Learning for Solving Math Word Problems
Link: arxiv.org/abs/2110.0846
Authors: Zhongli Li, Wenxuan Zhang, Chao Yan, Qingyu Zhou, Chao Li, Hongzhi Liu, Yunbo Cao
Affiliations: Tencent Cloud Xiaowei; Peking University

Other Neural Networks|Deep Learning|Models|Modeling (22 papers)

【1】 Automatic Learning of Subword Dependent Model Scales
Link: arxiv.org/abs/2110.0932
Authors: Felix Meyer, Wilfried Michel, Mohammad Zeineldeen, Ralf Schlüter, Hermann Ney
Affiliations: Human Language Technology and Pattern Recognition, Computer Science Department, RWTH Aachen University, Aachen, Germany; AppTek GmbH, Aachen, Germany
Notes: Submitted to ICASSP 2022

【2】 Efficient Sequence Training of Attention Models using Approximative Recombination
Link: arxiv.org/abs/2110.0924
Authors: Nils-Philipp Wynands, Wilfried Michel, Jan Rosendahl, Ralf Schlüter, Hermann Ney
Affiliations: Human Language Technology and Pattern Recognition, Computer Science Department, RWTH Aachen University, Aachen, Germany; AppTek GmbH, Aachen, Germany
Notes: Submitted to ICASSP 2022

【3】 Schrödinger's Tree -- On Syntax and Neural Language Models
Link: arxiv.org/abs/2110.0888
Authors: Artur Kulmizev, Joakim Nivre
Affiliations: Uppsala University
Notes: Preprint; submitted to Frontiers in Artificial Intelligence: Perspectives for Natural Language Processing between AI, Linguistics and Cognitive Science

【4】 Predicting the Performance of Multilingual NLP Models
Link: arxiv.org/abs/2110.0887
Authors: Anirudh Srinivasan, Sunayana Sitaram, Tanuja Ganu, Sandipan Dandapat, Kalika Bali, Monojit Choudhury
Affiliations: The University of Texas at Austin

【5】 Reminding the Incremental Language Model via Data-Free Self-Distillation
Link: arxiv.org/abs/2110.0874
Authors: Han Wang, Ruiliu Fu, Chengzhang Li, Xuejun Zhang, Jun Zhou, Yonghong Yan
Affiliations: Key Laboratory of Speech Acoustics and Content Understanding, Institute of Acoustics, China; University of Chinese Academy of Sciences, Beijing, China
Notes: 8 pages, 5 figures

【6】 Back to Reality: Leveraging Pattern-driven Modeling to Enable Affordable Sentiment Dependency Learning
Link: arxiv.org/abs/2110.0860
Authors: Heng Yang, Biqing Zeng, Mayi Xu, Tianxing Wang
Affiliations: School of Computer, South China Normal University, China; School of Software, South China Normal University, China; Linklogis Co., Ltd., Shenzhen, China

【7】 On the Robustness of Reading Comprehension Models to Entity Renaming
Link: arxiv.org/abs/2110.0855
Authors: Jun Yan, Yang Xiao, Sagnik Mukherjee, Bill Yuchen Lin, Robin Jia, Xiang Ren
Affiliations: University of Southern California; Fudan University; IIT Kanpur

【8】 PAGnol: An Extra-Large French Generative Model
Link: arxiv.org/abs/2110.0855
Authors: Julien Launay, E. L. Tommasone, Baptiste Pannier, François Boniface, Amélie Chatelain, Alessandro Cappelli, Iacopo Poli, Djamé Seddah
Affiliations: LightOn; LPENS, École Normale Supérieure; Inria, Paris

【9】 Learning to Solve Complex Tasks by Talking to Agents
Link: arxiv.org/abs/2110.0854
Authors: Tushar Khot, Kyle Richardson, Daniel Khashabi, Ashish Sabharwal
Affiliations: Allen Institute for AI, Seattle, WA, USA

【10】 Lifelong Pretraining: Continually Adapting Language Models to Emerging Corpora
Link: arxiv.org/abs/2110.0853
Authors: Xisen Jin, Dejiao Zhang, Henghui Zhu, Wei Xiao, Shang-Wen Li, Xiaokai Wei, Andrew Arnold, Xiang Ren
Affiliations: University of Southern California; Amazon Inc.
Notes: 8 pages

【11】 Sharpness-Aware Minimization Improves Language Model Generalization
Link: arxiv.org/abs/2110.0852
Authors: Dara Bahri, Hossein Mobahi, Yi Tay
Affiliations: Google Research, Mountain View, CA, USA

【12】 An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-Trained Language Models
Link: arxiv.org/abs/2110.0852
Authors: Nicholas Meade, Elinor Poole-Dayan, Siva Reddy
Affiliations: Mila / McGill University; Facebook CIFAR AI Chair

【13】 A Good Prompt Is Worth Millions of Parameters? Low-resource Prompt-based Learning for Vision-Language Models
Link: arxiv.org/abs/2110.0848
Authors: Woojeong Jin, Yu Cheng, Yelong Shen, Weizhu Chen, Xiang Ren
Affiliations: University of Southern California; Microsoft Corporation
Notes: Preprint

【14】 On the Safety of Conversational Models: Taxonomy, Dataset, and Benchmark
Link: arxiv.org/abs/2110.0846
Authors: Hao Sun, Guangxuan Xu, Jiawen Deng, Jiale Cheng, Chujie Zheng, Hao Zhou, Nanyun Peng, Xiaoyan Zhu, Minlie Huang
Affiliations: The CoAI group, DCST, Institute for Artificial Intelligence, State Key Lab of Intelligent Technology and Systems, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China

【15】 A Short Study on Compressing Decoder-Based Language Models
Link: arxiv.org/abs/2110.0846
Authors: Tianda Li, Yassir El Mesbahi, Ivan Kobyzev, Ahmad Rashid, Atif Mahmud, Nithin Anchuri, Habib Hajimolahoseini, Yang Liu, Mehdi Rezagholizadeh
Affiliations: Huawei Noah's Ark Lab

【16】 Good Examples Make A Faster Learner: Simple Demonstration-based Learning for Low-resource NER
Link: arxiv.org/abs/2110.0845
Authors: Dong-Ho Lee, Mahak Agarwal, Akshen Kadakia, Jay Pujara, Xiang Ren
Affiliations: Department of Computer Science, University of Southern California
Notes: 7 pages, 4 figures, 4 tables

【17】 What do Compressed Large Language Models Forget? Robustness Challenges in Model Compression
Link: arxiv.org/abs/2110.0841
Authors: Mengnan Du, Subhabrata Mukherjee, Yu Cheng, Milad Shokouhi, Xia Hu, Ahmed Hassan Awadallah
Affiliations: Texas A&M University; Microsoft Research; Rice University

【18】 Training Conversational Agents with Generative Conversational Networks
Link: arxiv.org/abs/2110.0838
Authors: Yen-Ting Lin, Alexandros Papangelis, Seokhwan Kim, Dilek Hakkani-Tur
Notes: Accepted at WeCNLP 2021

【19】 Learning with Noisy Labels by Targeted Relabeling
Link: arxiv.org/abs/2110.0835
Authors: Derek Chen, Zhou Yu, Samuel R. Bowman
Affiliations: ASAPP Inc., New York, NY; Dept. of Computer Science, Columbia University, NY; New York University, NY
Notes: 14 pages, 5 figures

【20】 Omni-sparsity DNN: Fast Sparsity Optimization for On-Device Streaming E2E ASR via Supernet
Link: arxiv.org/abs/2110.0835
Authors: Haichuan Yang, Yuan Shangguan, Dilin Wang, Meng Li, Pierce Chuang, Xiaohui Zhang, Ganesh Venkatesh, Ozlem Kalinli, Vikas Chandra
Affiliations: Facebook AI

【21】 Boosting coherence of language models
Link: arxiv.org/abs/2110.0829
Authors: Nikolay Malkin, Zhen Wang, Nebojsa Jojic
Affiliations: Mila / Université de Montréal; Ohio State University; Microsoft Research

【22】 ASR4REAL: An extended benchmark for speech models
Link: arxiv.org/abs/2110.0858
Authors: Morgane Riviere, Jade Copet, Gabriel Synnaeve
Affiliations: Facebook AI Research
Notes: Submitted to ICASSP 2022

Other (16 papers)

【1】 Measuring Cognitive Status from Speech in a Smart Home Environment
Link: arxiv.org/abs/2110.0942
Authors: Kathleen C. Fraser, Majid Komeili

【2】 LDNet: Unified Listener Dependent Modeling in MOS Prediction for Synthetic Speech
Link: arxiv.org/abs/2110.0910
Authors: Wen-Chin Huang, Erica Cooper, Junichi Yamagishi, Tomoki Toda
Affiliations: Nagoya University, Japan; National Institute of Informatics, Japan
Notes: Submitted to ICASSP 2022. Code available at: this https URL

【3】 Using Natural Language Processing to Understand Reasons and Motivators Behind Customer Calls in Financial Domain
Link: arxiv.org/abs/2110.0909
Authors: Ankit Patil, Ankush Chopra, Sohom Ghosh, Vamshi Vadla
Affiliations: Fidelity Investments, AI CoE, Bengaluru, India
Notes: Accepted at ICCMDE-2021. To be published in Springer Lecture Notes on Data Engineering and Communications Technologies

【4】 Ranking Facts for Explaining Answers to Elementary Science Questions
Link: arxiv.org/abs/2110.0903
Authors: Jennifer D'Souza, Isaiah Onando Mulang', Soeren Auer
Affiliations: University of Bonn, Germany
Notes: 25 pages, 5 figures, accepted for publication in NLE

【5】 GNN-LM: Language Modeling based on Global Contexts via GNN
Link: arxiv.org/abs/2110.0874
Authors: Yuxian Meng, Shi Zong, Xiaoya Li, Xiaofei Sun, Tianwei Zhang, Fei Wu, Jiwei Li
Affiliations: Shannon.AI; Nanjing University; Nanyang Technological University; Zhejiang University

【6】 n-stage Latent Dirichlet Allocation: A Novel Approach for LDA
Link: arxiv.org/abs/2110.0859
Authors: Zekeriya Anil Guven, Banu Diri, Tolgahan Cakaloglu
Affiliations: Department of Computer Engineering, Ege University, Izmir, Turkey; Yildiz Technical University, Istanbul, Turkey; Walmart Global Tech, Dallas, USA
Notes: Published in 2019 4th International Conference on Computer Science and Engineering (UBMK). This study is an extended version of "Comparison of Topic Modeling Methods for Type Detection of Turkish News" (this http URL). Please cite the IEEE paper

【7】 Tackling Multi-Answer Open-Domain Questions via a Recall-then-Verify Framework
Link: arxiv.org/abs/2110.0854
Authors: Zhihong Shao, Minlie Huang
Affiliations: The CoAI group, DCST, Tsinghua University; Institute for Artificial Intelligence; State Key Lab of Intelligent Technology and Systems; Beijing National Research Center for Information Science and Technology; Tsinghua University, Beijing, China

【8】 Pro-KD: Progressive Distillation by Following the Footsteps of the Teacher
Link: arxiv.org/abs/2110.0853
Authors: Mehdi Rezagholizadeh, Aref Jafari, Puneeth Salad, Pranav Sharma, Ali Saheb Pasand, Ali Ghodsi
Affiliations: Huawei Noah's Ark Lab; University of Waterloo

【9】 A Dataset for Discourse Structure in Peer Review Discussions
Link: arxiv.org/abs/2110.0852
Authors: Neha Nayak Kennard, Tim O'Gorman, Akshay Sharma, Chhandak Bagchi, Matthew Clinton, Pranay Kumar Yelugam, Rajarshi Das, Hamed Zamani, Andrew McCallum

【10】 Metadata Shaping: Natural Language Annotations for the Tail
Link: arxiv.org/abs/2110.0843
Authors: Simran Arora, Sen Wu, Enci Liu, Christopher Re
Affiliations: Stanford University

【11】 EncT5: Fine-tuning T5 Encoder for Non-autoregressive Tasks
Link: arxiv.org/abs/2110.0842
Authors: Frederick Liu, Siamak Shakeri, Hongkun Yu, Jing Li
Affiliations: Google

【12】 Information-Theoretic Measures of Dataset Difficulty
Link: arxiv.org/abs/2110.0842
Authors: Kawin Ethayarajh, Yejin Choi, Swabha Swayamdipta
Affiliations: Stanford University; Allen Institute for Artificial Intelligence; Paul G. Allen School of Computer Science, University of Washington

【13】 Invariant Language Modeling
Link: arxiv.org/abs/2110.0841
Authors: Maxime Peyrard, Sarvjeet Singh Ghotra, Martin Josifoski, Vidhan Agarwal, Barun Patra, Dean Carignan, Emre Kiciman, Robert West
Affiliations: EPFL; Microsoft Corporation

【14】 Evaluating the Faithfulness of Importance Measures in NLP by Recursively Masking Allegedly Important Tokens and Retraining
Link: arxiv.org/abs/2110.0841
Authors: Andreas Madsen, Nicholas Meade, Vaibhav Adlakha, Siva Reddy
Affiliations: Mila – Quebec AI Institute; Polytechnique Montréal; McGill; Facebook CIFAR AI Chair

【15】 DS-TOD: Efficient Domain Specialization for Task Oriented Dialog
Link: arxiv.org/abs/2110.0839
Authors: Chia-Chien Hung, Anne Lauscher, Simone Paolo Ponzetto, Goran Glavaš
Affiliations: Data and Web Science Group, University of Mannheim, Germany; MilaNLP, Bocconi University, Italy; Center for Information and Language Processing, LMU Munich, Germany

【16】 When Combating Hype, Proceed with Caution
Link: arxiv.org/abs/2110.0830
Authors: Samuel R. Bowman
Affiliations: New York University



Published 2021-10-19 12:10