Machine Learning Research Group
Artificial Intelligence Lab, Department of Computer Science, University of Texas at Austin

Publications: Theory and Knowledge Refinement

Most machine learning methods do not exploit prior knowledge. Theory refinement (also called theory revision or knowledge-base refinement) is the task of modifying an initial, imperfect knowledge base (KB) to make it consistent with empirical data. The goal is to improve learning performance by exploiting prior knowledge and to acquire knowledge that is more comprehensible and more closely related to existing concepts in the domain. Another motivation is to automate the knowledge-refinement step in the development of expert systems and other knowledge-based systems.

We have developed systems for refining knowledge in various forms including:

  • Propositional logical rules (systems: EITHER, NEITHER)
  • First-order logical rules, i.e., logic programs (system: FORTE)
  • Certainty-factor rules (system: RAPTURE)
  • Bayesian networks (system: BANNER)
These systems have demonstrated the ability to revise real knowledge bases and improve learning in several real-world domains. A toy sketch of the general revise-until-consistent loop appears below, followed by our publications in this area.
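
To make the task concrete, here is a minimal, hypothetical sketch (in Python) of the generic revise-until-consistent loop that theory-refinement systems implement in far more sophisticated ways: find examples the current theory misclassifies, then greedily generalize (delete antecedents) or specialize (add antecedents) the responsible rules, keeping only edits that improve accuracy. It is not the actual EITHER, NEITHER, FORTE, RAPTURE, or BANNER algorithm; the Rule class, the refine function, and the bird/fly data are illustrative assumptions.

```python
# A minimal sketch of propositional theory refinement, in the spirit of
# hill-climbing revision systems such as NEITHER (NOT the actual algorithm).
# Given an initial rule base and labeled examples, it diagnoses errors and
# applies the smallest rule edit (drop or add antecedents) that helps.
# All names (Rule, refine, the fly/bird data) are hypothetical illustrations.

from dataclasses import dataclass, field

@dataclass
class Rule:
    """Propositional Horn rule: `head` is proved when all of `body` is observed."""
    head: str
    body: set = field(default_factory=set)

def covers(rules, features, target):
    """True if some rule for `target` has all its antecedents among `features`."""
    return any(r.head == target and r.body <= features for r in rules)

def accuracy(rules, examples, target):
    return sum(covers(rules, feats, target) == label for feats, label in examples) / len(examples)

def refine(rules, examples, target):
    """Greedily revise `rules` until no single edit improves accuracy on `examples`."""
    rules = [Rule(r.head, set(r.body)) for r in rules]          # work on a copy
    improved = True
    while improved:
        improved = False
        for feats, label in examples:
            if covers(rules, feats, target) == label:
                continue                                        # example already correct
            before = accuracy(rules, examples, target)
            if label:
                # False negative: generalize the rule closest to firing by
                # deleting its unsatisfied antecedents.
                candidates = [r for r in rules if r.head == target]
                if not candidates:
                    continue
                rule = min(candidates, key=lambda r: len(r.body - feats))
                removed = rule.body - feats
                rule.body -= removed
                if accuracy(rules, examples, target) > before:
                    improved = True
                else:
                    rule.body |= removed                        # revert unhelpful edit
            else:
                # False positive: specialize the offending rule by adding one
                # antecedent that this negative example lacks.
                rule = next(r for r in rules if r.head == target and r.body <= feats)
                pool = {f for fs, lab in examples if lab for f in fs} - feats
                if not pool:
                    continue
                added = max(pool, key=lambda f: sum(f in fs for fs, lab in examples if lab))
                rule.body.add(added)
                if accuracy(rules, examples, target) > before:
                    improved = True
                else:
                    rule.body.discard(added)                    # revert unhelpful edit
    return rules

if __name__ == "__main__":
    # Hypothetical imperfect initial theory: too specific, so it misses large birds.
    initial = [Rule("fly", {"bird", "small"})]
    data = [({"bird", "small"}, True),
            ({"bird", "large"}, True),        # false negative under the initial theory
            ({"mammal", "small"}, False)]
    for r in refine(initial, data, target="fly"):
        print(f"{r.head} :- {', '.join(sorted(r.body))}")        # e.g. "fly :- bird"
```

In this toy run, a single deleted antecedent repairs the theory (fly :- bird), illustrating the bias toward minimal revisions that keep the result close to the original expert-provided KB.
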
  1. A Recap of Early Work on Theory and Knowledge Refinement
    [Details] [PDF] [Slides (PPT)]
    Raymond J. Mooney, Jude W. Shavlik
    In AAAI Spring Symposium on Combining Machine Learning and Knowledge Engineering, March 2021.
  2. Online Structure Learning for Markov Logic Networks
    [Details] [PDF] [Slides (PPT)]
    Tuyen N. Huynh and Raymond J. Mooney
    In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2011), 81-96, September 2011.
  3. Guiding a Reinforcement Learner with Natural Language Advice: Initial Results in RoboCup Soccer
    [Details] [PDF]
    Gregory Kuhlmann, Peter Stone, Raymond J. Mooney, and Jude W. Shavlik
    In the AAAI-2004 Workshop on Supervisory Control of Learning and Adaptive Systems, July 2004.
  4. Integrating Abduction and Induction in Machine Learning
    [Details] [PDF]
    Raymond J. Mooney
    In P. A. Flach and A. C. Kakas, editors, Abduction and Induction, 181-191, 2000. Kluwer Academic Publishers.
  5. Theory Refinement for Bayesian Networks with Hidden Variables
    [Details] [PDF]
    Sowmya Ramachandran and Raymond J. Mooney
    In Proceedings of the Fifteenth International Conference on Machine Learning (ICML-98), 454-462, Madison, WI, July 1998.
  6. Theory Refinement of Bayesian Networks with Hidden Variables
    [Details] [PDF]
    Sowmya Ramachandran
    PhD Thesis, Department of Computer Sciences, University of Texas at Austin, Austin, TX, May 1998. 139 pages. Also appears as Technical Report AI 98-265, Artificial Intelligence Lab, University of Texas at Austin.
  7. Integrating Abduction and Induction in Machine Learning
    [Details] [PDF]
    Raymond J. Mooney
    In Working Notes of the IJCAI-97 Workshop on Abduction and Induction in AI, 37-42, Nagoya, Japan, August 1997.
  8. Parameter Revision Techniques for Bayesian Networks with Hidden Variables: An Experimental Comparison
    [Details] [PDF]
    Sowmya Ramachandran and Raymond J. Mooney
    January 1997. Unpublished Technical Note.
  9. A Novel Application of Theory Refinement to Student Modeling
    [Details] [PDF]
    Paul Baffes and Raymond J. Mooney
    In Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI-96), 403-408, Portland, OR, August 1996.
  10. Combining Symbolic and Connectionist Learning Methods to Refine Certainty-Factor Rule-Bases
    [Details] [PDF]
    J. Jeffrey Mahoney
    PhD Thesis, Department of Computer Sciences, University of Texas at Austin, May 1996. 113 pages.
  11. Refinement-Based Student Modeling and Automated Bug Library Construction
    [Details] [PDF]
    Paul Baffes and Raymond Mooney
    Journal of Artificial Intelligence in Education, 7(1):75-116, 1996.
  12. Revising Bayesian Network Parameters Using Backpropagation
    [Details] [PDF]
    Sowmya Ramachandran and Raymond J. Mooney
    In Proceedings of the International Conference on Neural Networks (ICNN-96), Special Session on Knowledge-Based Artificial Neural Networks, 82-87, Washington, DC, June 1996.
  13. Refinement of Bayesian Networks by Combining Connectionist and Symbolic Techniques
    [Details] [PDF]
    Sowmya Ramachandran
    1995. Unpublished Ph.D. Thesis Proposal.
  14. Automated Refinement of First-Order Horn-Clause Domain Theories
    [Details] [PDF]
    Bradley L. Richards and Raymond J. Mooney
    Machine Learning, 19(2):95-131, 1995.
  15. A Preliminary PAC Analysis of Theory Revision
    [Details] [PDF]
    Raymond J. Mooney
    In T. Petsche, S. Hanson, and J. W. Shavlik, editors, Computational Learning Theory and Natural Learning Systems, Vol. 3, 43-53, Cambridge, MA, 1995. MIT Press.
  16. Automatic Student Modeling and Bug Library Construction using Theory Refinement
    [Details] [PDF]
    Paul T. Baffes
    PhD Thesis, Department of Computer Sciences, The University of Texas at Austin, Austin, TX, 1994.
  17. Comparing Methods For Refining Certainty Factor Rule-Bases
    [Details] [PDF]
    J. Jeffrey Mahoney and Raymond J. Mooney
    In Proceedings of the Eleventh International Workshop on Machine Learning (ML-94), 173-180, Rutgers, NJ, July 1994.
  18. Modifying Network Architectures For Certainty-Factor Rule-Base Revision
    [Details] [PDF]
    J. Jeffrey Mahoney and Raymond J. Mooney
    In Proceedings of the International Symposium on Integrating Knowledge and Neural Heuristics (ISIKNH-94), 75-85, Pensacola, FL, May 1994.
  19. A Multistrategy Approach to Theory Refinement
    [Details] [PDF]
    Raymond J. Mooney and Dirk Ourston
    In Ryszard S. Michalski and G. Tecuci, editors, Machine Learning: A Multistrategy Approach, Vol. IV, 141-164, San Mateo, CA, 1994. Morgan Kaufmann.
  20. Theory Refinement Combining Analytical and Empirical Methods
    [Details] [PDF]
    Dirk Ourston and Raymond J. Mooney
    Artificial Intelligence, 311-344, 1994.
  21. Extending Theory Refinement to M-of-N Rules
    [Details] [PDF]
    Paul T. Baffes and Raymond J. Mooney
    Informatica, 17:387-397, 1993.
  22. Symbolic Revision of Theories With M-of-N Rules
    [Details] [PDF]
    Paul T. Baffes and Raymond J. Mooney
    In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence (IJCAI-93), 1135-1140, Chambéry, France, August 1993.
  23. Learning to Model Students: Using Theory Refinement to Detect Misconceptions
    [Details] [PDF]
    Paul T. Baffes
    June 1993. Ph.D. proposal, Department of Computer Sciences, University of Texas at Austin.
  24. Combining Connectionist and Symbolic Learning to Refine Certainty-Factor Rule-Bases
    [Details] [PDF]
    J. Jeffrey Mahoney and Raymond J. Mooney
    Connection Science, 339-364, 1993.
  25. Integrating Theory and Data in Category Learning
    [Details] [PDF]
    Raymond J. Mooney
    In G. V. Nakamura and D. L. Medin and R. Taraban, editors, Categorization by Humans and Machines, 189-218, 1993.
  26. Induction Over the Unexplained: Using Overly-General Domain Theories to Aid Concept Learning
    [Details] [PDF]
    Raymond J. Mooney
    Machine Learning, 10:79-110, 1993.
  27. An Operator-Based Approach to First-Order Theory Revision
    [Details] [PDF]
    Bradley Lance Richards
    PhD Thesis, Department of Computer Science, University of Texas at Austin, August 1992.
  28. Using Theory Revision to Model Students and Acquire Stereotypical Errors
    [Details] [PDF]
    Paul T. Baffes and Raymond J. Mooney
    In Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society, 617-622, Bloomington, IN, 1992.
  29. Combining Symbolic and Neural Learning to Revise Probabilistic Theories
    [Details] [PDF]
    J. Jeffrey Mahoney and Raymond J. Mooney
    In Proceedings of the ML92 Workshop on Integrated Learning in Real Domains, Aberdeen, Scotland, July 1992.
  30. Automated Debugging of Logic Programs via Theory Revision
    [Details] [PDF]
    Raymond J. Mooney and Bradley L. Richards
    In Proceedings of the Second International Workshop on Inductive Logic Programming (ILP-92), Tokyo, Japan, 1992.
  31. Batch versus Incremental Theory Refinement
    [Details] [PDF]
    Raymond J. Mooney
    In Proceedings of the 1992 AAAI Spring Symposium on Knowledge Assimilation, Stanford, CA, March 1992.
  32. Using Explanation-Based and Empirical Methods in Theory Revision
    [Details] [PDF]
    Dirk Ourston
    PhD Thesis, Department of Computer Science, University of Texas at Austin, 1991.
  33. First-Order Theory Revision
    [Details] [PDF]
    Bradley L. Richards and Raymond J. Mooney
    In Proceedings of the Eighth International Workshop on Machine Learning, 447-451, Evanston, IL, June 1991.
  34. Improving Shared Rules in Multiple Category Domain Theories
    [Details] [PDF]
    Dirk Ourston and Raymond J. Mooney
    In Proceedings of the Eighth International Workshop on Machine Learning, 534-538, Evanston, IL, June 1991.
  35. Constructive Induction in Theory Refinement
    [Details] [PDF]
    Raymond J. Mooney and Dirk Ourston
    In Proceedings of the Eighth International Workshop on Machine Learning, 178-182, Evanston, IL, June 1991.
  36. Theory Refinement with Noisy Data
    [Details] [PDF]
    Raymond J. Mooney and Dirk Ourston
    Technical Report AI91-153, Artificial Intelligence Laboratory, University of Texas, Austin, TX, March 1991.
  37. Changing the Rules: A Comprehensive Approach to Theory Refinement
    [Details] [PDF]
    D. Ourston and Raymond J. Mooney
    In Proceedings of the Eighth National Conference on Artificial Intelligence (AAAI-90), 815-820, Boston, MA, July 1990.