Theory and Knowledge Refinement
Most machine learning methods do not exploit prior knowledge. Theory refinement (also known as theory revision or knowledge-base refinement) is the task of modifying an initial, imperfect knowledge base (KB) to make it consistent with empirical data. The goals are to improve learning performance by exploiting prior knowledge and to acquire knowledge that is more comprehensible because it is related to existing concepts in the domain. A further motivation is to automate knowledge refinement during the development of expert systems and other knowledge-based systems.
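As a minimal illustration of the task (and not a description of any of the systems below), the following toy Python sketch revises a small propositional rule base so that it covers a positive example it previously misclassified while still rejecting the negatives. The rules, features, and greedy repair strategy are invented for illustration only.

```python
# Toy sketch of propositional theory refinement (not the EITHER/NEITHER
# algorithms): revise an imperfect rule base until it agrees with labeled
# examples. All rules, features, and the repair strategy are illustrative.

# A theory: the concept holds if ALL antecedents of ANY rule are satisfied.
theory = [
    {"has_wings", "lays_eggs", "flies"},   # rule 1 for the concept "bird"
    {"has_feathers"},                      # rule 2 for the concept "bird"
]

# Labeled examples: (set of features that are true, label)
examples = [
    ({"has_wings", "lays_eggs", "flies"}, True),
    ({"has_wings", "lays_eggs"}, True),    # flightless positive: theory fails here
    ({"has_wings", "flies"}, False),       # negative example
]

def covers(theory, features):
    """Classify positive iff some rule's antecedents are all present."""
    return any(rule <= features for rule in theory)

def refine(theory, examples):
    """Greedy repair: for each uncovered positive example, drop the fewest
    antecedents from one rule so it fires, unless the weakened rule would
    start covering a negative example."""
    theory = [set(rule) for rule in theory]
    negatives = [f for f, label in examples if not label]
    for features, label in examples:
        if label and not covers(theory, features):
            # choose the rule that needs the fewest antecedent deletions
            rule = min(theory, key=lambda r: len(r - features))
            weakened = rule & features
            if not any(weakened <= neg for neg in negatives):
                rule.intersection_update(features)
    return theory

print(refine(theory, examples))
# Rule 1 loses "flies", so the flightless positive is covered, while the
# negative example is still rejected because it lacks "lays_eggs".
```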

We have developed systems for refining knowledge in various forms including:

  • Propositional logical rules (systems: EITHER, NEITHER)
  • First-order logical rules, i.e. logic programs (system: FORTE)
  • Certainty-factor rules (system: RAPTURE)
  • Bayesian networks (system: BANNER)
These systems have demonstrated the ability to revise real knowledge bases and improve learning in several real-world domains. Relevant publications:
A Recap of Early Work on Theory and Knowledge Refinement 2021
Raymond J. Mooney and Jude W. Shavlik, In AAAI Spring Symposium on Combining Machine Learning and Knowledge Engineering, March 2021.
Online Structure Learning for Markov Logic Networks 2011
Tuyen N. Huynh and Raymond J. Mooney, In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2011), Vol. 2, pp. 81-96, September 2011.
Guiding a Reinforcement Learner with Natural Language Advice: Initial Results in RoboCup Soccer 2004
Gregory Kuhlmann, Peter Stone, Raymond J. Mooney, and Jude W. Shavlik, In The AAAI-2004 Workshop on Supervisory Control of Learning and Adaptive Systems, July 2004.
Integrating Abduction and Induction in Machine Learning 2000
Raymond J. Mooney, In Abduction and Induction, P. A. Flach and A. C. Kakas (Eds.), pp. 181-191 2000. Kluwer Academic Publishers.
Theory Refinement for Bayesian Networks with Hidden Variables 1998
Sowmya Ramachandran and Raymond J. Mooney, In Proceedings of the Fifteenth International Conference on Machine Learning (ICML-98), pp. 454-462, Madison, WI, July 1998.
Theory Refinement of Bayesian Networks with Hidden Variables 1998
Sowmya Ramachandran, PhD Thesis, Department of Computer Sciences, University of Texas at Austin. 139 pages. Also appears as Technical Report AI 98-265, Artificial Intelligence Lab, University of Texas at Austin.
Integrating Abduction and Induction in Machine Learning 1997
Raymond J. Mooney, In Working Notes of the IJCAI-97 Workshop on Abduction and Induction in AI, pp. 37-42, Nagoya, Japan, August 1997.
Parameter Revision Techniques for Bayesian Networks with Hidden Variables: An Experimental Comparison 1997
Sowmya Ramachandran and Raymond J. Mooney, Unpublished Technical Note.
A Novel Application of Theory Refinement to Student Modeling 1996
Paul Baffes and Raymond J. Mooney, In Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI-96), pp. 403-408, Portland, OR, August 1996.
Combining Symbolic and Connectionist Learning Methods to Refine Certainty-Factor Rule-Bases 1996
J. Jeffrey Mahoney, PhD Thesis, Department of Computer Sciences, University of Texas at Austin. 113 pages.
Refinement-Based Student Modeling and Automated Bug Library Construction 1996
Paul Baffes and Raymond Mooney, Journal of Artificial Intelligence in Education, Vol. 7, 1 (1996), pp. 75-116.
Revising Bayesian Network Parameters Using Backpropagation 1996
Sowmya Ramachandran and Raymond J. Mooney, In Proceedings of the International Conference on Neural Networks (ICNN-96), Special Session on Knowledge-Based Artificial Neural Networks, pp. 82-87, Washington DC, June 1996.
A Preliminary PAC Analysis of Theory Revision 1995
Raymond J. Mooney, In Computational Learning Theory and Natural Learning Systems, Vol. 3, T. Petsche and S. Hanson and Jude W. Shavlik (Eds.), pp. 43-53, Cambridge, MA 1995. MIT Press.
Automated Refinement of First-Order Horn-Clause Domain Theories 1995
Bradley L. Richards and Raymond J. Mooney, Machine Learning, Vol. 19, 2 (1995), pp. 95-131.
Refinement of Bayesian Networks by Combining Connectionist and Symbolic Techniques 1995
Sowmya Ramachandran, Unpublished Ph.D. Thesis Proposal.
A Multistrategy Approach to Theory Refinement 1994
Raymond J. Mooney and Dirk Ourston, In Machine Learning: A Multistrategy Approach, Vol. IV, Ryszard S. Michalski and G. Tecuci (Eds.), pp. 141-164, San Mateo, CA 1994. Morgan Kaufmann.
Automatic Student Modeling and Bug Library Construction using Theory Refinement 1994
Paul T. Baffes, PhD Thesis, Department of Computer Sciences, The University of Texas at Austin.
Comparing Methods For Refining Certainty Factor Rule-Bases 1994
J. Jeffrey Mahoney and Raymond J. Mooney, In Proceedings of the Eleventh International Workshop on Machine Learning (ML-94), pp. 173-180, Rutgers, NJ, July 1994.
Modifying Network Architectures For Certainty-Factor Rule-Base Revision 1994
J. Jeffrey Mahoney and Raymond J. Mooney, In Proceedings of the International Symposium on Integrating Knowledge and Neural Heuristics (ISIKNH-94), pp. 75-85, Pensacola, FL, May 1994.
Theory Refinement Combining Analytical and Empirical Methods 1994
Dirk Ourston and Raymond J. Mooney, Artificial Intelligence (1994), pp. 311-344.
Combining Connectionist and Symbolic Learning to Refine Certainty-Factor Rule-Bases 1993
J. Jeffrey Mahoney and Raymond J. Mooney, Connection Science (1993), pp. 339-364.
Extending Theory Refinement to M-of-N Rules 1993
Paul T. Baffes and Raymond J. Mooney, Informatica, Vol. 17 (1993), pp. 387-397.
Induction Over the Unexplained: Using Overly-General Domain Theories to Aid Concept Learning 1993
Raymond J. Mooney, Machine Learning, Vol. 10 (1993), pp. 79-110.
Integrating Theory and Data in Category Learning 1993
Raymond J. Mooney, In Categorization by Humans and Machines, G. V. Nakamura and D. L. Medin and R. Taraban (Eds.), pp. 189-218 1993.
Learning to Model Students: Using Theory Refinement to Detect Misconceptions 1993
Paul T. Baffes, Unpublished Ph.D. Proposal, Department of Computer Sciences, University of Texas at Austin.
Symbolic Revision of Theories With M-of-N Rules 1993
Paul T. Baffes and Raymond J. Mooney, In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence (IJCAI-93), pp. 1135-1140, Chambery, France, August 1993.
An Operator-Based Approach to First-Order Theory Revision 1992
Bradley Lance Richards, PhD Thesis, Department of Computer Sciences, University of Texas at Austin.
Automated Debugging of Logic Programs via Theory Revision 1992
Raymond J. Mooney and Bradley L. Richards, In Proceedings of the Second International Workshop on Inductive Logic Programming (ILP-92), Tokyo, Japan 1992.
Batch versus Incremental Theory Refinement 1992
Raymond J. Mooney, In Proceedings of the 1992 AAAI Spring Symposium on Knowledge Assimilation, Stanford, CA, March 1992.
Combining Symbolic and Neural Learning to Revise Probabilistic Theories 1992
J. Jeffrey Mahoney and Raymond J. Mooney, In Proceedings of the ML92 Workshop on Integrated Learning in Real Domains, Aberdeen, Scotland, July 1992.
Using Theory Revision to Model Students and Acquire Stereotypical Errors 1992
Paul T. Baffes and Raymond J. Mooney, In Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society, pp. 617-622, Bloomington, IN 1992.
Constructive Induction in Theory Refinement 1991
Raymond J. Mooney and Dirk Ourston, In Proceedings of the Eighth International Workshop on Machine Learning, pp. 178-182, Evanston, IL, June 1991.
First-Order Theory Revision 1991
Bradley L. Richards and Raymond J. Mooney, In Proceedings of the Eighth International Machine Learning Workshop, pp. 447-451, Evanston, IL, June 1991.
Improving Shared Rules in Multiple Category Domain Theories 1991
Dirk Ourston and Raymond J. Mooney, In Proceedings of the Eighth International Workshop on Machine Learning, pp. 534-538, Evanston, IL, June 1991.
Theory Refinement with Noisy Data 1991
Raymond J. Mooney and Dirk Ourston, Technical Report AI91-153, Artificial Intelligence Laboratory, University of Texas.
Using Explanation-Based and Empirical Methods in Theory Revision 1991
Dirk Ourston, PhD Thesis, Department of Computer Sciences, University of Texas at Austin.
Changing the Rules: A Comprehensive Approach to Theory Refinement 1990
Dirk Ourston and Raymond J. Mooney, In Proceedings of the Eighth National Conference on Artificial Intelligence (AAAI-90), pp. 815-820, Boston, MA, July 1990.
Controlling Search for the Consequences of New Information during Knowledge Integration 1989
K. Murray and Bruce Porter, In Proceedings of the Sixth International Workshop on Machine Learning, pp. 290-295, Ithaca, NY, June 1989.