Abstract

We introduce recurrent neural network grammars, probabilistic models of sentences with explicit phrase structure. We explain efficient inference procedures that allow application to both parsing and language modeling. Experiments show that they provide better parsing in English than any single previously published supervised generative model and better language modeling than state-of-the-art sequential RNNs in English and Chinese.
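
The "explicit phrase structure" in the abstract comes from the paper's generative transition system, in which three action types, NT(X) (open a nonterminal), GEN(x) (generate a word), and REDUCE (close the most recent open nonterminal), jointly build a parse tree and emit the sentence. Below is a minimal sketch of how replaying such an action sequence deterministically yields a tree; the plain-Python stack simulation is an illustrative assumption, and the paper's actual model, which scores each action with stack LSTMs, is omitted here.

```python
# Minimal sketch of the RNNG generative action sequence (Dyer et al., 2016).
# Action names NT/GEN/REDUCE follow the paper; this replay is illustrative
# only and contains none of the neural parameterization.

def execute(actions):
    """Replay generative actions; return (bracketed tree, generated sentence)."""
    stack, words = [], []
    for act, arg in actions:
        if act == "NT":              # open a nonterminal: push an "open" marker
            stack.append(("open", arg))
        elif act == "GEN":           # generate a terminal word
            stack.append(("done", arg))
            words.append(arg)
        elif act == "REDUCE":        # close the most recent open nonterminal
            children = []
            while stack[-1][0] != "open":
                children.append(stack.pop()[1])
            label = stack.pop()[1]
            subtree = "(" + label + " " + " ".join(reversed(children)) + ")"
            stack.append(("done", subtree))
    assert len(stack) == 1 and stack[0][0] == "done", "incomplete tree"
    return stack[0][1], " ".join(words)

tree, sentence = execute([
    ("NT", "S"),
    ("NT", "NP"), ("GEN", "the"), ("GEN", "hungry"), ("GEN", "cat"), ("REDUCE", None),
    ("NT", "VP"), ("GEN", "meows"), ("REDUCE", None),
    ("REDUCE", None),
])
print(tree)      # (S (NP the hungry cat) (VP meows))
print(sentence)  # the hungry cat meows
```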

Keywords

Computer science, Artificial intelligence, Parsing, Recurrent neural network, Natural language processing, Rule-based machine translation, Generative grammar, Inference, Phrase, Language model, Artificial neural network, Phrase structure grammar, Context-free grammar

Publication Info

Year: 2016
Type: preprint
Citations: 84
Access: Closed

Citation Metrics

84 citations (OpenAlex)

Cite This

Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith (2016). Recurrent Neural Network Grammars. Proceedings of NAACL-HLT 2016. https://doi.org/10.18653/v1/n16-1024

Identifiers

DOI
10.18653/v1/n16-1024