Department of Computer Science

Machine Learning Research Group

University of Texas at Austin Artificial Intelligence Lab

Publications: Theory and Knowledge Refinement

Most machine learning methods do not exploit prior knowledge. Theory refinement (also known as theory revision or knowledge-base refinement) is the task of modifying an initial, imperfect knowledge base (KB) to make it consistent with empirical data. The goals are to improve learning performance by exploiting prior knowledge and to acquire knowledge that is more comprehensible because it is related to existing concepts in the domain. A further motivation is to automate the process of knowledge refinement in the development of expert systems and other knowledge-based systems.
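The task can be illustrated with a toy sketch (hypothetical code, not any of the actual systems described below): start from an imperfect propositional rule base and greedily revise it, here by dropping antecedents, until it fits a set of labeled examples.

```python
# Toy illustration of theory refinement (hypothetical; not the EITHER or
# NEITHER algorithms). A rule base predicts a concept if all antecedents
# of some rule hold. The initial theory is too specific for the data.

rules = [{"has_wings", "lays_eggs", "flies"}]  # imperfect theory of "bird"

examples = [
    ({"has_wings", "lays_eggs", "flies"}, True),   # sparrow
    ({"has_wings", "lays_eggs"}, True),            # penguin: false negative
    ({"lays_eggs"}, False),                        # snake
]

def predict(rules, facts):
    # A rule fires when its antecedents are a subset of the observed facts.
    return any(rule <= facts for rule in rules)

def accuracy(rules, examples):
    return sum(predict(rules, facts) == label for facts, label in examples)

def revise(rules, examples):
    """Greedy revision: try dropping one antecedent from one rule and
    keep the change only if it strictly improves fit to the data."""
    best = accuracy(rules, examples)
    improved = True
    while improved:
        improved = False
        for i, rule in enumerate(rules):
            for lit in rule:
                candidate = rules[:i] + [rule - {lit}] + rules[i + 1:]
                if accuracy(candidate, examples) > best:
                    rules, best, improved = candidate, accuracy(candidate, examples), True
                    break
            if improved:
                break
    return rules

revised = revise(rules, examples)
print(revised)  # the over-specific "flies" antecedent is dropped
```

Real refinement systems also specialize rules, add or delete whole rules, and handle first-order or probabilistic representations; this sketch shows only the generalization step on propositional rules.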

We have developed systems for refining knowledge in various forms including:

  • Propositional logical rules (systems: EITHER, NEITHER)
  • First-order logical rules, i.e. logic programs (system: FORTE)
  • Certainty-factor rules (system: RAPTURE)
  • Bayesian networks (system: BANNER)

These systems have demonstrated an ability to revise real knowledge bases and improve learning in several real-world domains. Relevant publications:
  1. Online Structure Learning for Markov Logic Networks
    [Details] [PDF] [Slides]
    Tuyen N. Huynh and Raymond J. Mooney
    In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2011), 81-96, September 2011.
  2. Guiding a Reinforcement Learner with Natural Language Advice: Initial Results in RoboCup Soccer
    [Details] [PDF]
    Gregory Kuhlmann, Peter Stone, Raymond J. Mooney, and Jude W. Shavlik
    In The AAAI-2004 Workshop on Supervisory Control of Learning and Adaptive Systems, July 2004.
  3. Integrating Abduction and Induction in Machine Learning
    [Details] [PDF]
    Raymond J. Mooney
    In P. A. Flach and A. C. Kakas, editors, Abduction and Induction, 181-191, 2000. Kluwer Academic Publishers.
  4. Theory Refinement for Bayesian Networks with Hidden Variables
    [Details] [PDF]
    Sowmya Ramachandran and Raymond J. Mooney
    In Proceedings of the Fifteenth International Conference on Machine Learning (ICML-98), 454-462, Madison, WI, July 1998.
  5. Theory Refinement of Bayesian Networks with Hidden Variables
    [Details] [PDF]
    Sowmya Ramachandran
    PhD Thesis, Department of Computer Sciences, University of Texas at Austin, Austin, TX, May 1998. 139 pages. Also appears as Technical Report AI 98-265, Artificial Intelligence Lab, University of Texas at Austin.
  6. Integrating Abduction and Induction in Machine Learning
    [Details] [PDF]
    Raymond J. Mooney
    In Working Notes of the IJCAI-97 Workshop on Abduction and Induction in AI, 37-42, Nagoya, Japan, August 1997.
  7. Parameter Revision Techniques for Bayesian Networks with Hidden Variables: An Experimental Comparison
    [Details] [PDF]
    Sowmya Ramachandran and Raymond J. Mooney
    January 1997. Unpublished Technical Note.
  8. A Novel Application of Theory Refinement to Student Modeling
    [Details] [PDF]
    Paul Baffes and Raymond J. Mooney
    In Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI-96), 403-408, Portland, OR, August 1996.
  9. Combining Symbolic and Connectionist Learning Methods to Refine Certainty-Factor Rule-Bases
    [Details] [PDF]
    J. Jeffrey Mahoney
    PhD Thesis, Department of Computer Sciences, University of Texas at Austin, May 1996. 113 pages.
  10. Refinement-Based Student Modeling and Automated Bug Library Construction
    [Details] [PDF]
    Paul T. Baffes and Raymond J. Mooney
    Journal of Artificial Intelligence in Education, 7(1):75-116, 1996.
  11. Revising Bayesian Network Parameters Using Backpropagation
    [Details] [PDF]
    Sowmya Ramachandran and Raymond J. Mooney
    In Proceedings of the International Conference on Neural Networks (ICNN-96), Special Session on Knowledge-Based Artificial Neural Networks, 82-87, Washington, DC, June 1996.
  12. Refinement of Bayesian Networks by Combining Connectionist and Symbolic Techniques
    [Details] [PDF]
    Sowmya Ramachandran
    1995. Unpublished Ph.D. Thesis Proposal.
  13. Automated Refinement of First-Order Horn-Clause Domain Theories
    [Details] [PDF]
    Bradley L. Richards and Raymond J. Mooney
    Machine Learning, 19(2):95-131, 1995.
  14. A Preliminary PAC Analysis of Theory Revision
    [Details] [PDF]
    Raymond J. Mooney
    In T. Petsche, S. Hanson, and J. W. Shavlik, editors, Computational Learning Theory and Natural Learning Systems, Vol. 3, 43-53, Cambridge, MA, 1995. MIT Press.
  15. Automatic Student Modeling and Bug Library Construction using Theory Refinement
    [Details] [PDF]
    Paul T. Baffes
    PhD Thesis, Department of Computer Sciences, The University of Texas at Austin, Austin, TX, 1994.
  16. Comparing Methods For Refining Certainty Factor Rule-Bases
    [Details] [PDF]
    J. Jeffrey Mahoney and Raymond J. Mooney
    In Proceedings of the Eleventh International Workshop on Machine Learning (ML-94), 173-180, Rutgers, NJ, July 1994.
  17. Modifying Network Architectures For Certainty-Factor Rule-Base Revision
    [Details] [PDF]
    J. Jeffrey Mahoney and Raymond J. Mooney
    In Proceedings of the International Symposium on Integrating Knowledge and Neural Heuristics (ISIKNH-94), 75-85, Pensacola, FL, May 1994.
  18. A Multistrategy Approach to Theory Refinement
    [Details] [PDF]
    Raymond J. Mooney and Dirk Ourston
    In Ryszard S. Michalski and G. Tecuci, editors, Machine Learning: A Multistrategy Approach, Vol. IV, 141-164, San Mateo, CA, 1994. Morgan Kaufmann.
  19. Theory Refinement Combining Analytical and Empirical Methods
    [Details] [PDF]
    Dirk Ourston and Raymond J. Mooney
    Artificial Intelligence, 311-344, 1994.
  20. Extending Theory Refinement to M-of-N Rules
    [Details] [PDF]
    Paul T. Baffes and Raymond J. Mooney
    Informatica, 17:387-397, 1993.
  21. Symbolic Revision of Theories With M-of-N Rules
    [Details] [PDF]
    Paul T. Baffes and Raymond J. Mooney
    In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence (IJCAI-93), 1135-1140, Chambery, France, August 1993.
  22. Learning to Model Students: Using Theory Refinement to Detect Misconceptions
    [Details] [PDF]
    Paul T. Baffes
    June 1993. Ph.D. proposal, Department of Computer Sciences, University of Texas at Austin.
  23. Combining Connectionist and Symbolic Learning to Refine Certainty-Factor Rule-Bases
    [Details] [PDF]
    J. Jeffrey Mahoney and Raymond J. Mooney
    Connection Science, 339-364, 1993.
  24. Integrating Theory and Data in Category Learning
    [Details] [PDF]
    Raymond J. Mooney
    In G. V. Nakamura, D. L. Medin, and R. Taraban, editors, Categorization by Humans and Machines, 189-218, 1993.
  25. Induction Over the Unexplained: Using Overly-General Domain Theories to Aid Concept Learning
    [Details] [PDF]
    Raymond J. Mooney
    Machine Learning, 10:79-110, 1993.
  26. An Operator-Based Approach to First-Order Theory Revision
    [Details] [PDF]
    Bradley Lance Richards
    PhD Thesis, Department of Computer Sciences, University of Texas at Austin, August 1992.
  27. Using Theory Revision to Model Students and Acquire Stereotypical Errors
    [Details] [PDF]
    Paul T. Baffes and Raymond J. Mooney
    In Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society, 617-622, Bloomington, IN, 1992.
  28. Combining Symbolic and Neural Learning to Revise Probabilistic Theories
    [Details] [PDF]
    J. Jeffrey Mahoney and Raymond J. Mooney
    In Proceedings of the ML92 Workshop on Integrated Learning in Real Domains, Aberdeen, Scotland, July 1992.
  29. Automated Debugging of Logic Programs via Theory Revision
    [Details] [PDF]
    Raymond J. Mooney and Bradley L. Richards
    In Proceedings of the Second International Workshop on Inductive Logic Programming (ILP-92), Tokyo, Japan, 1992.
  30. Batch versus Incremental Theory Refinement
    [Details] [PDF]
    Raymond J. Mooney
    In Proceedings of the 1992 AAAI Spring Symposium on Knowledge Assimilation, Stanford, CA, March 1992.
  31. Using Explanation-Based and Empirical Methods in Theory Revision
    [Details] [PDF]
    Dirk Ourston
    PhD Thesis, Department of Computer Sciences, University of Texas at Austin, 1991.
  32. First-Order Theory Revision
    [Details] [PDF]
    Bradley L. Richards and Raymond J. Mooney
    In Proceedings of the Eighth International Workshop on Machine Learning, 447-451, Evanston, IL, June 1991.
  33. Improving Shared Rules in Multiple Category Domain Theories
    [Details] [PDF]
    Dirk Ourston and Raymond J. Mooney
    In Proceedings of the Eighth International Workshop on Machine Learning, 534-538, Evanston, IL, June 1991.
  34. Constructive Induction in Theory Refinement
    [Details] [PDF]
    Raymond J. Mooney and Dirk Ourston
    In Proceedings of the Eighth International Workshop on Machine Learning, 178-182, Evanston, IL, June 1991.
  35. Theory Refinement with Noisy Data
    [Details] [PDF]
    Raymond J. Mooney and Dirk Ourston
    Technical Report AI91-153, Artificial Intelligence Laboratory, University of Texas, Austin, TX, March 1991.
  36. Changing the Rules: A Comprehensive Approach to Theory Refinement
    [Details] [PDF]
    D. Ourston and Raymond J. Mooney
    In Proceedings of the Eighth National Conference on Artificial Intelligence (AAAI-90), 815-820, Boston, MA, July 1990.