Language Models with Pre-Trained (GloVe) Word Embeddings


  • Abstract: In this work we implement the training of a Language Model (LM) using a Recurrent Neural Network (RNN) and GloVe word embeddings, introduced by Pennington et al. in [1]. The implementation follows the general idea of training RNNs for LM tasks presented in [2], but uses a Gated Recurrent Unit (GRU) [3] as the memory cell rather than the more commonly used LSTM [4].

    Subjects: Computation and Language (cs.CL)
    Cite as: arXiv:1610.03759 [cs.CL] (or arXiv:1610.03759v2 [cs.CL] for this version)
    Submission history: From: Victor Makarenkov
    [v1] Wed, 12 Oct 2016 15:53:02 GMT (115kb, D)
    [v2] Sun, 5 Feb 2017 11:24:05 GMT (119kb, D)

publication date

  • January 1, 2016
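
The approach described in the abstract — a language model whose input tokens are looked up in a frozen pre-trained embedding matrix and fed through a GRU memory cell — can be sketched as below. This is a minimal illustrative forward pass, not the paper's implementation: the tiny vocabulary, the randomly initialized embedding matrix (a stand-in for real 50–300 dimensional GloVe vectors), and all dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny vocabulary; E is a stand-in for a pre-trained
# GloVe embedding matrix, kept frozen during LM training.
vocab = {"<s>": 0, "the": 1, "model": 2, "</s>": 3}
embed_dim, hidden_dim = 8, 16
E = rng.normal(size=(len(vocab), embed_dim))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell (Cho et al., 2014) with uniform init."""
    def __init__(self, d_in, d_h):
        s = 1.0 / np.sqrt(d_h)
        self.Wz, self.Wr, self.Wh = (rng.uniform(-s, s, (d_h, d_in)) for _ in range(3))
        self.Uz, self.Ur, self.Uh = (rng.uniform(-s, s, (d_h, d_h)) for _ in range(3))
        self.bz, self.br, self.bh = (np.zeros(d_h) for _ in range(3))

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h + self.bz)   # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h + self.br)   # reset gate
        h_tilde = np.tanh(self.Wh @ x + self.Uh @ (r * h) + self.bh)
        return (1.0 - z) * h + z * h_tilde                 # new hidden state

# LM forward pass: embed each token, update the GRU state, then
# project the final state to a next-token distribution via softmax.
cell = GRUCell(embed_dim, hidden_dim)
W_out = rng.normal(size=(len(vocab), hidden_dim)) * 0.1

h = np.zeros(hidden_dim)
for tok in ["<s>", "the", "model"]:
    h = cell.step(E[vocab[tok]], h)

logits = W_out @ h
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```

In training, the cross-entropy loss between `probs` and the actual next token would be backpropagated through `W_out` and the GRU parameters while leaving `E` fixed, which is what distinguishes the pre-trained-embedding setup from learning embeddings jointly with the LM.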