Theory Refinement for Bayesian Networks with Hidden Variables (1998)
While there has been growing interest in the problem of learning Bayesian networks from data, no technique exists for learning or revising Bayesian networks with hidden variables (i.e. variables not represented in the data) that has been shown to be efficient, effective, and scalable through evaluation on real data. The few techniques that exist for revising such networks perform a blind search through a large space of revisions, and are therefore computationally expensive. This paper presents BANNER, a technique for using data to revise a given Bayesian network with noisy-or and noisy-and nodes to improve its classification accuracy. The initial network can be derived directly from a logical theory expressed as propositional rules. BANNER can revise networks with hidden variables, and add hidden variables when necessary. Unlike previous approaches, BANNER employs mechanisms similar to logical theory refinement techniques for using the data to focus the search for effective modifications. Experiments on real-world problems in the domain of molecular biology demonstrate that BANNER can effectively revise fairly large networks to significantly improve their accuracy.
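The abstract refers to noisy-or and noisy-and nodes, the local combination models BANNER assumes in the networks it revises. As a minimal sketch (not the paper's implementation), a noisy-or node is true unless every active parent's independent causal influence fails, while a noisy-and node is true only if every active parent's influence succeeds; the function names and `leak` parameter below are illustrative assumptions:

```python
def noisy_or(parent_probs, leak=0.0):
    """P(child = true) under a noisy-or model.

    parent_probs: causal strengths p_i of the currently active parents;
    each active parent independently fails to cause the child with
    probability 1 - p_i.  `leak` (hypothetical here) lets the child be
    true even with no active parents.
    """
    p_all_fail = 1.0 - leak
    for p in parent_probs:
        p_all_fail *= 1.0 - p  # child stays false only if every cause fails
    return 1.0 - p_all_fail


def noisy_and(parent_probs):
    """P(child = true) under a noisy-and model: every active parent's
    influence must independently succeed (probability p_i each)."""
    p_true = 1.0
    for p in parent_probs:
        p_true *= p
    return p_true
```

For example, with two active parents of strengths 0.8 and 0.6, noisy-or gives 1 - (0.2)(0.4) = 0.92, while noisy-and gives (0.8)(0.6) = 0.48.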
Citation:
In Proceedings of the Fifteenth International Conference on Machine Learning (ICML-98), pp. 454--462, Madison, WI, July 1998.
Raymond J. Mooney, Faculty, mooney [at] cs utexas edu
Sowmya Ramachandran, Ph.D. Alumni, sowmya [at] shai com