Learning context-free grammars with recurrent neural networks

T. Harada, O. Araki, A. Sakurai

Research output: Contribution to conference › Paper › peer-review

Abstract

The primary purpose of this work is to construct a recurrent neural network (RNN) architecture that learns context-free grammars (CFGs) with recursive rules, with the aim of gaining insight into human language acquisition. Specifically, we are interested in how RNNs can learn recursive rules. The models proposed here are constructed with two promising connectionist techniques: recursive auto-associative memory (RAAM) and the simple recurrent network (SRN). RAAM learns to represent parse trees as real-valued vectors, and the SRN learns to parse sentences. We investigated whether the RAAM/SRN model can learn to parse the language {a^n b^n | n ≥ 1}, as well as two other languages generated by simple CFGs with recursively embedded phrases.
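As a rough illustration of the SRN half of such a model, the sketch below trains a minimal Elman-style simple recurrent network on next-symbol prediction for strings of {a^n b^n | n ≥ 1}. This is a hypothetical example, not the authors' implementation: the network sizes, the end-of-string marker '#', and the training scheme (one-step gradients with a copied context layer, as in Elman's original formulation, rather than full backpropagation through time) are all assumptions.

import numpy as np

# Hypothetical minimal Elman-style SRN for next-symbol prediction on
# strings of {a^n b^n | n >= 1}; '#' marks the end of a string.
rng = np.random.default_rng(0)
SYMBOLS = ["a", "b", "#"]
I, H, O = len(SYMBOLS), 8, len(SYMBOLS)

Wxh = rng.normal(0, 0.5, (H, I))   # input -> hidden
Whh = rng.normal(0, 0.5, (H, H))   # context (previous hidden) -> hidden
Who = rng.normal(0, 0.5, (O, H))   # hidden -> output

def one_hot(sym):
    v = np.zeros(I)
    v[SYMBOLS.index(sym)] = 1.0
    return v

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def train_string(s, lr=0.1):
    # Elman-style training: the context vector is treated as a fixed
    # extra input, so only one-step gradients are propagated.
    global Wxh, Whh, Who
    h = np.zeros(H)
    for t in range(len(s) - 1):
        x, target = one_hot(s[t]), one_hot(s[t + 1])
        h_new = sigmoid(Wxh @ x + Whh @ h)
        y = softmax(Who @ h_new)
        dy = y - target                       # cross-entropy gradient
        dh = (Who.T @ dy) * h_new * (1 - h_new)
        Who -= lr * np.outer(dy, h_new)
        Wxh -= lr * np.outer(dh, x)
        Whh -= lr * np.outer(dh, h)
        h = h_new

# Train on short strings, then inspect predictions on one example.
for _ in range(2000):
    n = rng.integers(1, 5)
    train_string("a" * n + "b" * n + "#")

h = np.zeros(H)
s = "aaabbb#"
for t in range(len(s) - 1):
    h = sigmoid(Wxh @ one_hot(s[t]) + Whh @ h)
    pred = SYMBOLS[int(np.argmax(softmax(Who @ h)))]
    print(f"after '{s[:t + 1]}': predict '{pred}' (true '{s[t + 1]}')")

After enough a/b pairs, the network's predictions on the 'b' run become deterministic, so counting behavior (if learned) shows up directly in the printed next-symbol predictions.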

Original language: English
Pages: 2602-2607
Number of pages: 6
Publication status: Published - 2001
Event: International Joint Conference on Neural Networks (IJCNN'01) - Washington, DC, United States
Duration: 15 Jul 2001 - 19 Jul 2001

Conference

Conference: International Joint Conference on Neural Networks (IJCNN'01)
Country/Territory: United States
City: Washington, DC
Period: 15/07/01 - 19/07/01
