Abstract

We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities.
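
For concreteness, below is a minimal NumPy sketch of the skip-gram idea behind the paper, trained here with negative sampling (a simplification introduced in the authors' follow-up work, rather than the hierarchical softmax used in this paper), together with the vector-offset analogy test used to measure syntactic and semantic word similarities. The toy corpus, dimensions, and hyperparameters are illustrative assumptions, not the paper's settings, and this is not the authors' optimized C implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
corpus = ("the king rules the land and the queen rules the land "
          "the man walks and the woman walks").split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 16               # vocabulary size, embedding dimension

W_in = rng.normal(0, 0.1, (V, D))   # input (word) vectors
W_out = rng.normal(0, 0.1, (V, D))  # output (context) vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, window, k = 0.05, 2, 5          # learning rate, context window, negatives
for epoch in range(300):
    for pos, word in enumerate(corpus):
        w = idx[word]
        lo, hi = max(0, pos - window), min(len(corpus), pos + window + 1)
        for ctx_pos in range(lo, hi):
            if ctx_pos == pos:
                continue
            c = idx[corpus[ctx_pos]]
            # one observed (positive) pair plus k uniform negative samples
            # (the follow-up paper samples negatives from a unigram^0.75
            # distribution; uniform sampling keeps this sketch short)
            targets = [c] + list(rng.integers(0, V, size=k))
            labels = [1.0] + [0.0] * k
            g_in = np.zeros(D)      # accumulated gradient for the input vector
            for t, label in zip(targets, labels):
                score = sigmoid(W_in[w] @ W_out[t])
                g = lr * (label - score)
                g_in += g * W_out[t]
                W_out[t] += g * W_in[w]
            W_in[w] += g_in

def analogy(a, b, c):
    """Nearest word to vec(b) - vec(a) + vec(c) by cosine similarity."""
    q = W_in[idx[b]] - W_in[idx[a]] + W_in[idx[c]]
    sims = (W_in @ q) / (np.linalg.norm(W_in, axis=1) * np.linalg.norm(q) + 1e-9)
    for i in np.argsort(-sims):
        if vocab[i] not in (a, b, c):
            return vocab[i]

# On this toy corpus the output is not meaningful; with vectors trained on
# large corpora, the paper reports regularities such as king - man + woman ≈ queen.
print(analogy("man", "king", "woman"))
```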

Keywords

Word (group theory), Computer science, Similarity (geometry), Set (abstract data type), Vector space, Artificial intelligence, Task (project management), Natural language processing, Vector space model, Test set, Semantic similarity, Space (punctuation), Artificial neural network, Mathematics

Related Publications

Finding Structure in Time

Time underlies many interesting human behaviors. Thus, the question of how to represent time in connectionist models is very important. One approach is to represent time implicitly...

1990 · Cognitive Science · 10,427 citations

Publication Info

Year: 2013
Type: Preprint
Citations: 18,006
Access: Closed

Citation Metrics

18,006 citations (source: OpenAlex)

Cite This

Tomáš Mikolov, Kai Chen, Greg S. Corrado et al. (2013). Efficient Estimation of Word Representations in Vector Space. arXiv (Cornell University). https://doi.org/10.48550/arxiv.1301.3781

Identifiers

DOI
10.48550/arxiv.1301.3781