TY - JOUR
T1 - Learning Mixture Models with the Regularized Latent Maximum Entropy Principle
Y1 - 2004
A1 - D. Schuurmans
A1 - F. Peng
A1 - Y. Zhao
A1 - S. Wang
AB - We present a new approach to estimating mixture models based on the latent maximum entropy principle (LME), an inference principle we have recently proposed. LME differs both from Jaynes' maximum entropy principle and from standard maximum likelihood estimation. We demonstrate the LME principle by deriving new algorithms for mixture model estimation, and show how robust new variants of the EM algorithm can be developed. Our experiments show that estimation based on LME generally yields better results than maximum likelihood estimation, particularly when inferring latent variable models from small amounts of data.
ER -
TY - CONF
T1 - Learning Mixture Models with the Latent Maximum Entropy Principle
Y1 - 2003
A1 - Y. Zhao
A1 - S. Wang
A1 - F. Peng
A1 - D. Schuurmans
ER -
TY - CONF
T1 - The Latent Maximum Entropy Principle
Y1 - 2002
A1 - S. Wang
A1 - Y. Zhao
A1 - D. Schuurmans
A1 - R. Rosenfeld
ER -
TY - CONF
T1 - Latent Maximum Entropy Principle for Statistical Language Modeling
Y1 - 2001
A1 - S. Wang
A1 - Y. Zhao
A1 - R. Rosenfeld
ER -