LibRec Digest: Building an RNN from Scratch

May 31, 2019 · LibRec智能推荐

LibRec智能推荐 Issue 34 (through May 31, 2019), with 6 new curated items.


The brave are not those who never shed tears, but those who keep running with tears in their eyes.


1. [DL Basics Tutorial] Building an RNN from Scratch in Python

Link: https://www.analyticsvidhya.com/blog/2019/01/fundamentals-deep-learning-recurrent-neural-networks-scratch-python/
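The core of "from scratch" RNN tutorials like this one is the hidden-state recurrence. The following is a minimal NumPy sketch of a vanilla RNN forward pass, not the linked tutorial's exact code; the layer sizes and random weights are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

input_size, hidden_size = 4, 8
Wxh = rng.normal(0, 0.1, (hidden_size, input_size))   # input-to-hidden weights
Whh = rng.normal(0, 0.1, (hidden_size, hidden_size))  # hidden-to-hidden weights
bh = np.zeros(hidden_size)

def rnn_forward(xs, h0):
    """Run the recurrence h_t = tanh(Wxh @ x_t + Whh @ h_{t-1} + bh)."""
    h = h0
    states = []
    for x in xs:
        h = np.tanh(Wxh @ x + Whh @ h + bh)
        states.append(h)
    return np.stack(states)

seq = rng.normal(size=(5, input_size))        # a toy sequence of 5 timesteps
hs = rnn_forward(seq, np.zeros(hidden_size))
print(hs.shape)  # (5, 8): one hidden state per timestep
```

Because the same `Wxh` and `Whh` are reused at every timestep, the parameter count is independent of sequence length; training would backpropagate through this loop (BPTT).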




Recent Trending Papers




1. Misspelling Oblivious Word Embeddings

Bora Edizel, Aleksandra Piktus, Piotr Bojanowski, Rui Ferreira, Edouard Grave, Fabrizio Silvestri

https://arxiv.org/abs/1905.09755v1

In this paper we present a method to learn word embeddings that are resilient to misspellings. Existing word embeddings have limited applicability to malformed texts, which contain a non-negligible amount of out-of-vocabulary words. In our method, misspellings of each word are embedded close to their correct variants.
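To see why subword information makes embeddings resilient to misspellings, here is a toy character n-gram overlap demo in the spirit of fastText-style subword features. This is not the paper's training objective, just an illustration; the example words are invented.

```python
def char_ngrams(word, n=3):
    """Character n-grams of a word, with boundary markers < and >."""
    padded = f"<{word}>"
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

def jaccard(a, b):
    """Set overlap in [0, 1]."""
    return len(a & b) / len(a | b)

correct = char_ngrams("embedding")
misspelled = char_ngrams("embeding")   # a common misspelling
unrelated = char_ngrams("keyboard")

print(jaccard(correct, misspelled))  # high overlap with the misspelling
print(jaccard(correct, unrelated))   # low overlap with an unrelated word
```

A model whose word vectors are built from such subword units maps a misspelling near its correct variant almost for free; the paper goes further by training the embedding objective on misspelling pairs directly.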


2. Interpreting and improving natural-language processing (in machines) with natural language processing (in the brain)

Mariya Toneva, Leila Wehbe

https://arxiv.org/abs/1905.11833v1

Despite much work, it is still unclear what the representations learned by neural NLP models correspond to. We propose here a novel approach for interpreting neural networks that relies on the only processing system we have that does understand language: the human brain. We use brain imaging recordings of subjects reading complex natural text to interpret word and sequence embeddings from four recent NLP models: ELMo, USE, BERT, and Transformer-XL.


3. MatchZoo: A Learning, Practicing, and Developing System for Neural Text Matching

Jiafeng Guo, Yixing Fan, Xiang Ji, Xueqi Cheng

https://arxiv.org/abs/1905.10289v1

Recently, deep learning technology has been widely adopted for text matching, making neural text matching a new and active research domain. With a large number of neural matching models emerging rapidly, it becomes more and more difficult for researchers, especially newcomers, to learn and understand these new models. In this paper, therefore, we present a novel system, namely MatchZoo, to facilitate the learning, practicing and designing of neural text matching models.
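One family of models MatchZoo organizes is the representation-focused matcher: encode each text independently into a vector, then score the pair by vector similarity. The sketch below shows that pattern with mean-pooled embeddings and cosine similarity; it is a generic illustration, not MatchZoo's actual API, and the toy vocabulary and random embeddings are made up.

```python
import numpy as np

rng = np.random.default_rng(42)
vocab = {w: i for i, w in enumerate(
    "what is deep learning neural text matching model".split())}
emb = rng.normal(size=(len(vocab), 16))   # toy word embedding table

def encode(text):
    """Mean-pool word embeddings; out-of-vocabulary words are skipped."""
    ids = [vocab[w] for w in text.lower().split() if w in vocab]
    return emb[ids].mean(axis=0)

def score(query, doc):
    """Cosine similarity between the two independent encodings."""
    a, b = encode(query), encode(doc)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(score("deep learning model", "neural text matching"))
print(score("deep learning model", "deep learning model"))  # ~1.0 for identical texts
```

Interaction-focused models, the other family the toolkit implements, instead compare the two texts word-by-word (e.g. via a similarity matrix) before aggregating a score.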


4. QuesNet: A Unified Representation for Heterogeneous Test Questions

Yu Yin, Qi Liu, Zhenya Huang, Enhong Chen, Wei Tong, Shijin Wang, Yu Su

https://arxiv.org/abs/1905.10949v1

Learning a unified representation for heterogeneous test questions is a crucial issue in online learning systems, and can promote many applications in the education domain. Specifically, we first design a unified framework to aggregate question information, with its heterogeneous inputs, into a comprehensive vector. Then we propose a two-level hierarchical pre-training algorithm to learn a better understanding of test questions in an unsupervised way.
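The "comprehensive vector" idea, fusing heterogeneous question inputs into one representation, can be sketched as projecting each modality to a shared dimension and pooling. This is only an illustration of the aggregation step, not QuesNet's hierarchical pre-trained architecture; all shapes and weights here are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 32  # shared representation size

W_text = rng.normal(0, 0.1, (d, 50))    # projects a 50-d text encoding
W_img = rng.normal(0, 0.1, (d, 128))    # projects a 128-d image encoding
W_meta = rng.normal(0, 0.1, (d, 10))    # projects 10-d side information

def fuse(text_vec, img_vec, meta_vec):
    """Project each input to the shared space, then average-pool."""
    parts = [W_text @ text_vec, W_img @ img_vec, W_meta @ meta_vec]
    return np.mean(parts, axis=0)       # one unified question vector

q = fuse(rng.normal(size=50), rng.normal(size=128), rng.normal(size=10))
print(q.shape)  # (32,)
```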


5. Compositional pre-training for neural semantic parsing

Amir Ziai

https://arxiv.org/abs/1905.11531v1

Semantic parsing is the process of translating natural language utterances into logical forms, which has many important applications such as question answering and instruction following. Prior work has used frameworks for inducing grammars over the training examples, which capture conditional independence properties that the model can leverage. In addition, since the pre-training stage is separate from the training on the main task, we also expand the universe of possible augmentations without causing catastrophic interference.
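The grammar-induced augmentation mentioned above boils down to recombination: abstract a slot out of a training pair and re-fill it to synthesize new (utterance, logical form) pairs. A toy sketch of that idea, with an invented seed example and entity list:

```python
# One seed training pair: an utterance and its aligned logical form.
seed = ("what states border texas", "answer(state(borders(texas)))")
entities = ["california", "ohio", "utah"]

def augment(pair, entities, slot="texas"):
    """Swap the slot entity in both utterance and logical form."""
    utt, lf = pair
    return [(utt.replace(slot, e), lf.replace(slot, e)) for e in entities]

new_pairs = augment(seed, entities)
for utt, lf in new_pairs:
    print(utt, "->", lf)
```

Because the substitution is applied consistently to both sides, each synthetic pair stays well-formed, which is what lets a model pre-trained on such data pick up compositional structure.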



