Abstract

We propose a novel context-dependent (CD) model for large-vocabulary speech recognition (LVSR) that leverages recent advances in using deep belief networks for phone recognition. We describe a pre-trained deep neural network hidden Markov model (DNN-HMM) hybrid architecture that trains the DNN to produce a distribution over senones (tied triphone states) as its output. The deep belief network pre-training algorithm is a robust and often helpful way to initialize deep neural networks generatively, which can aid optimization and reduce generalization error. We illustrate the key components of our model, describe the procedure for applying CD-DNN-HMMs to LVSR, and analyze the effects of various modeling choices on performance. Experiments on a challenging business search dataset demonstrate that CD-DNN-HMMs can significantly outperform conventional context-dependent Gaussian mixture model (GMM)-HMMs, with an absolute sentence accuracy improvement of 5.8% and 9.2% (or relative error reduction of 16.0% and 23.2%) over CD-GMM-HMMs trained using the minimum phone error rate (MPE) and maximum-likelihood (ML) criteria, respectively.
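As a minimal illustrative sketch of the hybrid architecture the abstract describes (not the paper's implementation), the snippet below shows a feed-forward DNN that maps an acoustic feature frame to posterior probabilities over senones, and the standard hybrid-decoding step of converting those posteriors to scaled likelihoods by dividing by the senone priors. The layer sizes, feature dimensionality, senone count, and uniform priors are assumed values chosen only for the example.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class SenoneDNN:
    """Feed-forward DNN whose softmax output is a distribution over senones."""

    def __init__(self, layer_sizes, seed=0):
        # layer_sizes, e.g. [input_dim, hidden_1, ..., num_senones]; values here are illustrative.
        rng = np.random.default_rng(seed)
        self.weights = [rng.normal(0.0, 0.01, (m, n))
                        for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
        self.biases = [np.zeros(n) for n in layer_sizes[1:]]

    def senone_posteriors(self, frame):
        # Hidden layers use logistic sigmoid units; the output layer is a
        # softmax giving p(senone | acoustic frame).
        h = frame
        for W, b in zip(self.weights[:-1], self.biases[:-1]):
            h = 1.0 / (1.0 + np.exp(-(h @ W + b)))
        return softmax(h @ self.weights[-1] + self.biases[-1])

def scaled_log_likelihoods(posteriors, senone_priors, eps=1e-10):
    # Hybrid DNN-HMM decoding treats p(frame | senone) as proportional to
    # p(senone | frame) / p(senone), so the HMM decoder consumes
    # log-posteriors minus log-priors.
    return np.log(posteriors + eps) - np.log(senone_priors + eps)

if __name__ == "__main__":
    num_senones = 100                                   # illustrative; real systems use thousands
    dnn = SenoneDNN([39, 256, 256, num_senones])        # assumed layer sizes
    frame = np.random.default_rng(1).normal(size=39)    # stand-in for an acoustic feature vector
    post = dnn.senone_posteriors(frame)
    priors = np.full(num_senones, 1.0 / num_senones)    # assumed uniform senone priors
    print(scaled_log_likelihoods(post, priors)[:5])
```

In a full system the softmax weights would be initialized by generative DBN pre-training and then fine-tuned discriminatively, and the scaled log-likelihoods would feed a conventional HMM decoder; the sketch above only illustrates the forward pass and likelihood scaling.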

Keywords

Hidden Markov model, Computer science, Speech recognition, Word error rate, Artificial intelligence, Artificial neural network, Context, Mixture model, Deep neural networks, Sentence, Generalization, Phone, Deep learning, Pattern recognition, Vocabulary, Mathematics

Publication Info

Year: 2011
Type: article
Volume: 20
Issue: 1
Pages: 30-42
Citations: 3050
Access: Closed

Citation Metrics

3050 (OpenAlex)

Cite This

George E. Dahl, Dong Yu, Li Deng, & Alex Acero (2011). Context-Dependent Pre-Trained Deep Neural Networks for Large-Vocabulary Speech Recognition. IEEE Transactions on Audio, Speech, and Language Processing, 20(1), 30-42. https://doi.org/10.1109/tasl.2011.2134090

Identifiers

DOI
10.1109/tasl.2011.2134090