Parameter Revision Techniques for Bayesian Networks with Hidden Variables: An Experimental Comparison (1997)
Learning Bayesian networks inductively in the presence of hidden variables is still an open problem. Even the simpler task of learning just the conditional probabilities on a Bayesian network with hidden variables is not completely solved. In this paper, we present an approach that learns the parameters of a Bayesian network composed of noisy-or and noisy-and nodes by using a gradient descent back-propagation approach similar to that used to train neural networks. For the task of causal inference, it has the advantage of being able to learn in the presence of hidden variables. We compare the performance of this approach with the adaptive probabilistic networks technique on a real-world classification problem in molecular biology, and show that our approach trains faster and learns networks with higher classification accuracy.
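The networks described in the abstract are built from noisy-or nodes trained by gradient descent on the parameters. As an illustrative sketch only (not the authors' code; all names and the sigmoid reparameterization are assumptions), a single noisy-or node with probability P(y=1 | x) = 1 - prod_{i: x_i=1} (1 - q_i) can be trained by gradient ascent on the log-likelihood like this:

```python
import math
import random

def sigmoid(w):
    return 1.0 / (1.0 + math.exp(-w))

def noisy_or(q, x):
    # P(y=1 | x) = 1 - product over active parents i of (1 - q_i),
    # where q_i is the probability that parent i alone causes y.
    p_fail = 1.0
    for qi, xi in zip(q, x):
        if xi:
            p_fail *= (1.0 - qi)
    return 1.0 - p_fail

def train(data, n_parents, lr=0.5, epochs=200):
    # Reparameterize q_i = sigmoid(w_i) so each parameter stays in (0, 1)
    # while w_i is updated without constraints (an assumption of this sketch).
    w = [0.0] * n_parents
    for _ in range(epochs):
        for x, y in data:
            q = [sigmoid(wi) for wi in w]
            p = noisy_or(q, x)
            p = min(max(p, 1e-9), 1.0 - 1e-9)  # avoid log(0)
            for i, xi in enumerate(x):
                if not xi:
                    continue  # inactive parents get no gradient
                # dP/dq_i = prod_{j != i, x_j=1} (1 - q_j) = (1 - p) / (1 - q_i)
                dp_dq = (1.0 - p) / (1.0 - q[i])
                # d log-likelihood / dP for label y in {0, 1}
                dll_dp = (1.0 / p) if y else (-1.0 / (1.0 - p))
                # chain rule through the sigmoid: dq_i/dw_i = q_i (1 - q_i)
                w[i] += lr * dll_dp * dp_dq * q[i] * (1.0 - q[i])
    return [sigmoid(wi) for wi in w]
```

With synthetic data generated from known parameters, the learned q values recover the ordering of the true causal strengths; the full method in the paper back-propagates such gradients through layers of noisy-or and noisy-and nodes, including hidden ones.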
Citation:
Unpublished Technical Note, 1997.
Raymond J. Mooney (Faculty), mooney [at] cs.utexas.edu
Sowmya Ramachandran (Ph.D. Alumni), sowmya [at] shai.com