Abstract

Deep Belief Networks (DBNs) are multi-layer generative models. They can be trained to model windows of coefficients extracted from speech and they discover multiple layers of features that capture the higher-order statistical structure of the data. These features can be used to initialize the hidden units of a feed-forward neural network that is then trained to predict the HMM state for the central frame of the window. Initializing with features that are good at generating speech makes the neural network perform much better than initializing with random weights. DBNs have already been used successfully for phone recognition with input coefficients that are MFCCs or filterbank outputs. In this paper, we demonstrate that they work even better when their inputs are speaker adaptive, discriminative features. On the standard TIMIT corpus, they give phone error rates of 19.6% using monophone HMMs and a bigram language model and 19.4% using monophone HMMs and a trigram language model.
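As a rough illustration of the training recipe the abstract describes (greedy layer-wise pretraining of stacked RBMs on input windows, whose weights then initialize a feed-forward net that predicts the HMM state of the central frame), here is a minimal NumPy sketch. The layer sizes, CD-1 learning rate, and toy binary data are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli-Bernoulli RBM trained with one-step contrastive divergence (CD-1)."""
    def __init__(self, n_vis, n_hid):
        self.W = rng.normal(0, 0.01, (n_vis, n_hid))
        self.b_vis = np.zeros(n_vis)
        self.b_hid = np.zeros(n_hid)

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_hid)

    def cd1_update(self, v0, lr=0.1):
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = sigmoid(h0_sample @ self.W.T + self.b_vis)   # one-step reconstruction
        h1 = self.hidden_probs(v1)
        # approximate gradient of the log-likelihood (positive minus negative phase)
        self.W += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b_vis += lr * (v0 - v1).mean(axis=0)
        self.b_hid += lr * (h0 - h1).mean(axis=0)

# toy stand-in for a window of input coefficients: 32 frames, 20 dims, binarized
X = (rng.random((32, 20)) < 0.5).astype(float)

# greedy layer-wise pretraining: each RBM models the previous layer's features
layers = [RBM(20, 15), RBM(15, 10)]
inp = X
for rbm in layers:
    for _ in range(50):
        rbm.cd1_update(inp)
    inp = rbm.hidden_probs(inp)

# the pretrained weights initialize a feed-forward net; a randomly initialized
# softmax output layer then predicts the HMM state of the central frame
n_states = 4
W_out = rng.normal(0, 0.01, (10, n_states))
logits = inp @ W_out
posteriors = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(posteriors.shape)  # (32, 4)
```

In the full recipe the whole stack (pretrained layers plus output layer) would then be fine-tuned with backpropagation on the HMM-state labels; only the unsupervised initialization is sketched here.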

Keywords

Discriminative model, Computer science, Artificial intelligence, Pattern recognition (psychology), Phone, Speech recognition, Deep belief network, Deep learning

Publication Info

Year
2011
Type
article
Pages
5060-5063
Citations
289
Access
Closed

Citation Metrics

289 citations (OpenAlex)
Cite This

Abdelrahman Mohamed, Tara N. Sainath, George E. Dahl et al. (2011). Deep Belief Networks using discriminative features for phone recognition. ICASSP, 5060-5063. https://doi.org/10.1109/icassp.2011.5947494

Identifiers

DOI
10.1109/icassp.2011.5947494