Abstract

We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.
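
As a rough illustration of the idea described in the abstract, the sketch below combines the internal layer activations of a pre-trained biLM into a single contextual vector per token using learned, softmax-normalized scalar weights and a global scale. The module name, layer count, and tensor shapes here are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class ScalarMix(nn.Module):
    """Hypothetical sketch: a task-specific, learned weighting of biLM layers.

    Each token's contextual representation is a softmax-weighted sum of all
    biLM layer activations, scaled by a learned scalar. Shapes and layer
    counts are assumptions for illustration only.
    """

    def __init__(self, num_layers: int):
        super().__init__()
        self.scalar_weights = nn.Parameter(torch.zeros(num_layers))  # one weight per layer
        self.gamma = nn.Parameter(torch.ones(1))                     # global scale

    def forward(self, layer_states: torch.Tensor) -> torch.Tensor:
        # layer_states: (num_layers, batch, seq_len, dim) from a frozen, pre-trained biLM
        weights = torch.softmax(self.scalar_weights, dim=0)
        mixed = (weights.view(-1, 1, 1, 1) * layer_states).sum(dim=0)
        return self.gamma * mixed


# Toy usage: mix three biLM layers (e.g., a character-based input layer plus
# two LSTM layers) into one contextual embedding per token; the result can be
# concatenated with a downstream model's existing word representations.
mix = ScalarMix(num_layers=3)
fake_bilm_states = torch.randn(3, 8, 20, 1024)   # stand-in for biLM activations
contextual_embeddings = mix(fake_bilm_states)    # shape: (8, 20, 1024)
```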

Keywords

Polysemy, Computer science, Natural language processing, Artificial intelligence, Word, Syntax, Textual entailment, Semantics, Representation, Linguistics, Deep learning, Logical consequence, Programming language

Publication Info

Year
2018
Type
Conference paper (Proceedings of NAACL-HLT 2018)
Pages
2227-2237
Citations
1786
Access
Closed

Citation Metrics

1786 citations (OpenAlex)

Cite This

Matthew E. Peters, Mark Neumann, Mohit Iyyer et al. (2018). Deep Contextualized Word Representations. Proceedings of NAACL-HLT 2018, 2227-2237. https://doi.org/10.18653/v1/n18-1202

Identifiers

DOI
10.18653/v1/n18-1202