Abstract

The generative aspect model is an extension of the multinomial model for text that allows word probabilities to vary stochastically across documents. Previous results with aspect models have been promising, but hindered by the computational difficulty of carrying out inference and learning. This paper demonstrates that the simple variational methods of Blei et al. (2001) can lead to inaccurate inferences and biased learning for the generative aspect model. We develop an alternative approach that leads to higher accuracy at comparable cost. An extension of Expectation-Propagation is used for inference and then embedded in an EM algorithm for learning. Experimental results are presented for both synthetic and real data sets.
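To make the abstract's first sentence concrete, the following is a minimal, hypothetical sketch of how the generative aspect model produces a document: each document draws its own mixing weights over aspects from a Dirichlet prior, so word probabilities vary stochastically across documents. The function name, vocabulary, and aspect-word matrix below are illustrative assumptions, not from the paper.

```python
import numpy as np

def generate_document(alpha, aspect_word, vocab, n_words, rng):
    """Sample one document from a generative aspect model (sketch).

    alpha       : Dirichlet prior over per-document aspect weights
    aspect_word : (n_aspects, n_vocab) word probabilities per aspect
    vocab       : list of word strings
    n_words     : document length
    """
    lam = rng.dirichlet(alpha)  # per-document aspect weights (vary per doc)
    words = []
    for _ in range(n_words):
        a = rng.choice(len(alpha), p=lam)          # pick an aspect
        w = rng.choice(len(vocab), p=aspect_word[a])  # pick a word from it
        words.append(vocab[w])
    return words

# Illustrative two-aspect example (toy vocabulary, made up for this sketch)
rng = np.random.default_rng(0)
vocab = ["ball", "game", "vote", "party"]
aspect_word = np.array([[0.5, 0.5, 0.0, 0.0],   # "sports" aspect
                        [0.0, 0.0, 0.5, 0.5]])  # "politics" aspect
doc = generate_document(np.array([1.0, 1.0]), aspect_word, vocab, 6, rng)
```

Inference in this model requires integrating over the per-document weights `lam`, which is what makes it computationally hard and motivates the Expectation-Propagation approach of the paper.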

Keywords

Inference, Generative model, Computer science, Generative grammar, Extension (predicate logic), Simple (philosophy), Multinomial distribution, Artificial intelligence, Machine learning, Mathematics, Econometrics, Programming language

Publication Info

Year: 2002
Type: article
Pages: 352-359
Citations: 441
Access: Closed

Citation Metrics

441 (OpenAlex)

Cite This

Thomas P. Minka, John Lafferty (2002). Expectation-propagation for the generative aspect model. 352-359.