Abstract

We introduce a new test of how well language models capture meaning in children's books. Unlike standard language modelling benchmarks, it distinguishes the task of predicting syntactic function words from that of predicting lower-frequency words, which carry greater semantic content. We compare a range of state-of-the-art models, each with a different way of encoding what has been previously read. We show that models which store explicit representations of long-term contexts outperform state-of-the-art neural language models at predicting semantic content words, although this advantage is not observed for syntactic function words. Interestingly, we find that the amount of text encoded in a single memory representation is highly influential to the performance: there is a sweet-spot, not too big and not too small, between single words and full sentences that allows the most meaningful information in a text to be effectively retained and recalled. Further, the attention over such window-based memories can be trained effectively through self-supervision. We then assess the generality of this principle by applying it to the CNN QA benchmark, which involves identifying named entities in paraphrased summaries of news articles, and achieve state-of-the-art performance.
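
The abstract sketches the core mechanism: the context is split into fixed-size windows stored as separate memories, and attention over those memories selects the candidate word that fills a blank in the query. The snippet below is a minimal illustrative sketch of that idea, not the authors' implementation: it assumes random, untrained embeddings, a bag-of-words window encoder, and hypothetical helper names (`embed`, `score_candidates`); in the paper the embeddings are learned and the attention over window memories can additionally be trained with self-supervision.

```python
# Minimal sketch: attention over window-based memories (illustrative only).
# Assumptions: toy vocabulary, random untrained embeddings, bag-of-words
# window encodings; "XXXXX" marks the blank in the query.
import numpy as np

rng = np.random.default_rng(0)

def embed(tokens, table):
    """Bag-of-words encoding: sum the embeddings of the tokens."""
    return sum(table[t] for t in tokens)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def score_candidates(context, query, candidates, table, window=5):
    """Attend over windows of `context` centred on candidate occurrences and
    score each candidate by the total attention mass on its windows."""
    half = window // 2
    q = embed([t for t in query if t != "XXXXX"], table)  # query minus the blank
    mems, centres = [], []
    for i, tok in enumerate(context):
        if tok in candidates:
            w = context[max(0, i - half): i + half + 1]  # fixed-size memory window
            mems.append(embed(w, table))
            centres.append(tok)
    attn = softmax(np.array([m @ q for m in mems]))  # soft attention over memories
    scores = {c: 0.0 for c in candidates}
    for a, c in zip(attn, centres):
        scores[c] += float(a)
    return scores

# Toy usage: pick the word for the blank from a short context.
context = "the cat sat on the mat while the dog slept near the door".split()
query = "the XXXXX sat on the mat".split()
candidates = ["cat", "dog", "mat", "door"]
table = {w: rng.normal(size=32) for w in set(context + query)}
print(score_candidates(context, query, candidates, table))
```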

Keywords

Computer science, Generality, Natural language processing, Artificial intelligence, Reading (process), Encoding (memory), Benchmark (surveying), Language model, Task (project management), Meaning (existential), Function (biology), Representation (politics), Goldilocks principle, Linguistics, Psychology

Related Publications

Skip-Thought Vectors

We describe an approach for unsupervised learning of a generic, distributed sentence encoder. Using the continuity of text from books, we train an encoder-decoder model that tri...

2015 · arXiv (Cornell University) · 723 citations

Publication Info

Year: 2016
Type: article
Citations: 307
Access: Closed

Citation Metrics

307 (OpenAlex)

Cite This

Felix Hill, Antoine Bordes, Sumit Chopra et al. (2016). The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations. arXiv (Cornell University).