Abstract

We contribute 5-gram counts and language models trained on the Common Crawl corpus, a collection of over 9 billion web pages. This release improves upon the Google n-gram counts in two key ways: the inclusion of low-count entries and deduplication to reduce boilerplate. By preserving singletons, we were able to use Kneser-Ney smoothing to build large language models. This paper describes how the corpus was processed, with emphasis on the problems that arise in working with data at this scale. Our unpruned Kneser-Ney English 5-gram language model, built on 975 billion deduplicated tokens, contains over 500 billion unique n-grams. We show gains of 0.5–1.4 BLEU by using large language models to translate into various languages.
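As a rough illustration of the two points the abstract highlights, the Python sketch below deduplicates text at the line level (to remove repeated boilerplate) and counts 5-grams without discarding singletons (which Kneser-Ney smoothing needs for its discount estimates). The function names and hashing scheme are illustrative assumptions, not the authors' released processing pipeline, and the in-memory counting would not scale to the corpus sizes described in the paper.

```python
import hashlib
from collections import Counter
from typing import Iterable, Iterator

ORDER = 5  # 5-gram counts, as in the released data


def deduplicate_lines(lines: Iterable[str]) -> Iterator[str]:
    """Emit each distinct line once, keyed by a hash of its content.

    Line-level deduplication removes boilerplate (navigation bars,
    disclaimers) that many web pages repeat verbatim.
    """
    seen = set()
    for line in lines:
        key = hashlib.sha1(line.strip().encode("utf-8")).digest()
        if key not in seen:
            seen.add(key)
            yield line


def count_ngrams(lines: Iterable[str], order: int = ORDER) -> Counter:
    """Count all n-grams of the given order, keeping singletons.

    Count-of-one entries are kept because Kneser-Ney smoothing derives
    its discounts from the counts of low-count n-grams, so nothing is
    pruned here.
    """
    counts: Counter = Counter()
    for line in lines:
        tokens = ["<s>"] + line.split() + ["</s>"]
        for i in range(len(tokens) - order + 1):
            counts[tuple(tokens[i:i + order])] += 1
    return counts


if __name__ == "__main__":
    corpus = [
        "the quick brown fox jumps over the lazy dog",
        "the quick brown fox jumps over the lazy dog",  # duplicated boilerplate line
        "language models benefit from very large corpora",
    ]
    unique_lines = list(deduplicate_lines(corpus))
    ngrams = count_ngrams(unique_lines)
    print(len(unique_lines), "unique lines,", len(ngrams), "unique 5-grams")
```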

Keywords

n-gram, Language model, Computer science, Boilerplate text, Natural language processing, Gram, Key (lock), Artificial intelligence, World Wide Web, Programming language, Operating system

Publication Info

Year
2014
Type
article
Pages
3579-3584
Citations
145
Access
Closed

Citation Metrics

145 (OpenAlex)

Cite This

Christian Buck, Kenneth Heafield, Bas van Ooyen (2014). N-gram Counts and Language Models from the Common Crawl. Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC 2014), 3579-3584.