Department of Computer Science

Machine Learning Research Group

University of Texas at Austin Artificial Intelligence Lab

Publications: Theory and Knowledge Refinement

Most machine learning does not exploit prior knowledge. Theory refinement (also known as theory revision or knowledge-base refinement) is the task of modifying an initial, imperfect knowledge base (KB) to make it consistent with empirical data. The goal is to improve learning by exploiting prior knowledge and to acquire knowledge that is more comprehensible because it relates to existing concepts in the domain. Another motivation is to automate knowledge refinement in the development of expert systems and other knowledge-based systems.
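Abstractly, a theory refiner searches for small revisions to the initial KB that improve its fit to the data. The following is a minimal, representation-neutral sketch of such a greedy refinement loop; the function names and the toy threshold "theory" are illustrative placeholders, not any of the actual systems described below.

```python
# Abstract sketch of a greedy (hill-climbing) theory-refinement loop.
# Representation-neutral: the real systems use much richer revision operators.

def refine(theory, examples, propose_revisions, accuracy):
    """Greedily apply the best-found revision until none improves accuracy."""
    best = accuracy(theory, examples)
    improved = True
    while improved:
        improved = False
        for revision in propose_revisions(theory):
            score = accuracy(revision, examples)
            if score > best:                 # keep only improving revisions
                theory, best = revision, score
                improved = True
                break
    return theory

# Toy instance: the "theory" is a decision threshold, revisions shift it by 1.
examples = [(1, False), (2, False), (3, True), (4, True)]
acc = lambda t, ex: sum((x >= t) == y for x, y in ex) / len(ex)
revised = refine(5, examples, lambda t: [t - 1, t + 1], acc)
```

In the real systems the revisions range from adjusting certainty factors to adding, deleting, or generalizing rules and their antecedents.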

We have developed systems for refining knowledge in various forms including:

  • Propositional logical rules: EITHER, NEITHER
  • First-order logical rules (logic programs): FORTE
  • Certainty-factor rules: RAPTURE
  • Bayesian networks: BANNER
These systems have demonstrated an ability to revise real knowledge bases and improve learning in several real-world domains, including molecular biology, plant pathology, infectious-disease diagnosis, and intelligent tutoring.
  1. A Recap of Early Work on Theory and Knowledge Refinement
    [Details] [PDF] [Slides (PPT)]
    Raymond J. Mooney, Jude W. Shavlik
    In AAAI Spring Symposium on Combining Machine Learning and Knowledge Engineering, March 2021.
    A variety of research on theory and knowledge refinement that integrated knowledge engineering and machine learning was conducted in the 1990's. This work developed techniques for taking engineered knowledge in the form of propositional or first-order logical rule bases and revising it to fit empirical data using symbolic, probabilistic, and/or neural-network learning methods. We review this work to provide historical context for expanding these techniques to integrate modern knowledge engineering and machine learning methods.
    ML ID: 392
  2. Online Structure Learning for Markov Logic Networks
    [Details] [PDF] [Slides (PPT)]
    Tuyen N. Huynh and Raymond J. Mooney
    In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2011), 81-96, September 2011.
    Most existing learning methods for Markov Logic Networks (MLNs) use batch training, which becomes computationally expensive and eventually infeasible for large datasets with thousands of training examples, which may not even all fit in main memory. To address this issue, previous work has used online learning to train MLNs. However, this work assumes that the model's structure (set of logical clauses) is given and learns only the model's parameters. Since the input structure is usually incomplete, it should also be updated. In this work, we present OSL, the first algorithm that performs both online structure and parameter learning for MLNs. Experimental results on two real-world datasets for natural-language field segmentation show that OSL outperforms systems that cannot revise structure.
    ML ID: 267
  3. Guiding a Reinforcement Learner with Natural Language Advice: Initial Results in RoboCup Soccer
    [Details] [PDF]
    Gregory Kuhlmann, Peter Stone, Raymond J. Mooney, and Jude W. Shavlik
    In The AAAI-2004 Workshop on Supervisory Control of Learning and Adaptive Systems, July 2004.
    We describe our current efforts towards creating a reinforcement learner that learns both from reinforcements provided by its environment and from human-generated advice. Our research involves two complementary components: (a) mapping advice expressed in English to a formal advice language and (b) using advice expressed in a formal notation in a reinforcement learner. We use a subtask of the challenging RoboCup simulated soccer task as our testbed.
    ML ID: 151
  4. Integrating Abduction and Induction in Machine Learning
    [Details] [PDF]
    Raymond J. Mooney
    In P. A. Flach and A. C. Kakas, editors, Abduction and Induction, 181-191, 2000. Kluwer Academic Publishers.
    This article discusses the integration of traditional abductive and inductive reasoning methods in the development of machine learning systems. In particular, it reviews our recent work in two areas: 1) The use of traditional abductive methods to propose revisions during theory refinement, where an existing knowledge base is modified to make it consistent with a set of empirical data; and 2) The use of inductive learning methods to automatically acquire from examples a diagnostic knowledge base used for abductive reasoning. Experimental results on real-world problems are presented to illustrate the capabilities of both of these approaches to integrating the two forms of reasoning.
    ML ID: 97
  5. Theory Refinement for Bayesian Networks with Hidden Variables
    [Details] [PDF]
    Sowmya Ramachandran and Raymond J. Mooney
    In Proceedings of the Fifteenth International Conference on Machine Learning (ICML-98), 454--462, Madison, WI, July 1998.
    While there has been growing interest in the problem of learning Bayesian networks from data, no technique exists for learning or revising Bayesian networks with hidden variables (i.e., variables not represented in the data) that has been shown to be efficient, effective, and scalable through evaluation on real data. The few techniques that exist for revising such networks perform a blind search through a large space of revisions and are therefore computationally expensive. This paper presents BANNER, a technique for using data to revise a given Bayesian network with noisy-or and noisy-and nodes to improve its classification accuracy. The initial network can be derived directly from a logical theory expressed as propositional rules. BANNER can revise networks with hidden variables and add hidden variables when necessary. Unlike previous approaches, BANNER employs mechanisms similar to logical theory refinement techniques for using the data to focus the search for effective modifications. Experiments on real-world problems in the domain of molecular biology demonstrate that BANNER can effectively revise fairly large networks to significantly improve their accuracies.
    ML ID: 87
  6. Theory Refinement of Bayesian Networks with Hidden Variables
    [Details] [PDF]
    Sowmya Ramachandran and Raymond J. Mooney
    PhD Thesis, Department of Computer Sciences, University of Texas at Austin, Austin, TX, May 1998. 139 pages. Also appears as Technical Report AI 98-265, Artificial Intelligence Lab, University of Texas at Austin.
    Research in theory refinement has shown that biasing a learner with initial, approximately correct knowledge produces more accurate results than learning from data alone. While techniques have been developed to revise logical and connectionist representations, little has been done to revise probabilistic representations. Bayesian networks are well-established as a sound formalism for representing and reasoning with probabilistic knowledge, and are widely used. There has been a growing interest in the problem of learning Bayesian networks from data. However, there is no existing technique for learning or revising Bayesian networks with hidden variables (i.e., variables not represented in the data) that has been shown to be efficient, effective, and scalable through evaluation on real data. The few techniques that exist for revising such networks perform a blind search through a large space of revisions, and are therefore computationally expensive. This dissertation presents Banner, a technique for using data to revise a given Bayesian network with Noisy-Or and Noisy-And nodes, to improve its classification accuracy. Additionally, the initial network can be derived directly from a logical theory expressed as propositional Horn-clause rules. Banner can revise networks with hidden variables, and add hidden variables when necessary. Unlike previous approaches to this problem, Banner employs mechanisms similar to those used in logical theory refinement techniques for using the data to focus the search for effective modifications to the network. It can also be used to learn networks with hidden variables from data alone. We also introduce Banner-Pr, a technique for revising the parameters of a Bayesian network with Noisy-Or/And nodes that directly exploits the computational efficiency afforded by these models.
Experiments on several real-world learning problems in domains such as molecular biology and intelligent tutoring systems demonstrate that Banner can effectively and efficiently revise networks to significantly improve their accuracies, and thus learn highly accurate classifiers. Comparisons with the Naive Bayes algorithm show that using the theory refinement approach gives Banner a substantial edge over learning from data alone. We also show that Banner-Pr converges faster and produces more accurate classifiers than an existing algorithm for learning the parameters of a network.
    ML ID: 84
  7. Integrating Abduction and Induction in Machine Learning
    [Details] [PDF]
    Raymond J. Mooney
    In Working Notes of the IJCAI-97 Workshop on Abduction and Induction in AI, 37--42, Nagoya, Japan, August 1997.
    This paper discusses the integration of traditional abductive and inductive reasoning methods in the development of machine learning systems. In particular, the paper discusses our recent work in two areas: 1) The use of traditional abductive methods to propose revisions during theory refinement, where an existing knowledge base is modified to make it consistent with a set of empirical data; and 2) The use of inductive learning methods to automatically acquire from examples a diagnostic knowledge base used for abductive reasoning.
    ML ID: 79
  8. Parameter Revision Techniques for Bayesian Networks with Hidden Variables: An Experimental Comparison
    [Details] [PDF]
    Sowmya Ramachandran and Raymond J. Mooney
    January 1997. Unpublished Technical Note.
    Learning Bayesian networks inductively in the presence of hidden variables is still an open problem. Even the simpler task of learning just the conditional probabilities on a Bayesian network with hidden variables is not completely solved. In this paper, we present an approach that learns the parameters of a Bayesian network composed of noisy-or and noisy-and nodes by using a gradient descent back-propagation approach similar to that used to train neural networks. For the task of causal inference, it has the advantage of being able to learn in the presence of hidden variables. We compare the performance of this approach with the adaptive probabilistic networks technique on a real-world classification problem in molecular biology, and show that our approach trains faster and learns networks with higher classification accuracy.
    ML ID: 70
  9. A Novel Application of Theory Refinement to Student Modeling
    [Details] [PDF]
    Paul Baffes and Raymond J. Mooney
    In Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI-96), 403-408, Portland, OR, August 1996.
    Theory refinement systems developed in machine learning automatically modify a knowledge base to render it consistent with a set of classified training examples. We illustrate a novel application of these techniques to the problem of constructing a student model for an intelligent tutoring system (ITS). Our approach is implemented in an ITS authoring system called Assert which uses theory refinement to introduce errors into an initially correct knowledge base so that it models incorrect student behavior. The efficacy of the approach has been demonstrated by evaluating a tutor developed with Assert with 75 students tested on a classification task covering concepts from an introductory course on the C++ programming language. The system produced reasonably accurate models and students who received feedback based on these models performed significantly better on a post test than students who received simple reteaching.
    ML ID: 65
  10. Combining Symbolic and Connectionist Learning Methods to Refine Certainty-Factor Rule-Bases
    [Details] [PDF]
    J. Jeffrey Mahoney
    PhD Thesis, Department of Computer Sciences, University of Texas at Austin, May 1996. 113 pages.
    This research describes the system RAPTURE, which is designed to revise rule bases expressed in certainty-factor format. Recent studies have shown that learning is facilitated when biased with domain-specific expertise, and have also shown that many real-world domains require some form of probabilistic or uncertain reasoning in order to successfully represent target concepts. RAPTURE was designed to take advantage of both of these results.

    Beginning with a set of certainty-factor rules, along with accurately-labelled training examples, RAPTURE makes use of both symbolic and connectionist learning techniques for revising the rules, in order that they correctly classify all of the training examples. A modified version of backpropagation is used to adjust the certainty factors of the rules, ID3's information-gain heuristic is used to add new rules, and the Upstart algorithm is used to create new hidden terms in the rule base.

    Results on refining four real-world rule bases are presented that demonstrate the effectiveness of this combined approach. Two of these rule bases were designed to identify particular areas in strands of DNA, one is for identifying infectious diseases, and the fourth attempts to diagnose soybean diseases. The results of RAPTURE are compared with those of backpropagation, C4.5, KBANN, and other learning systems. RAPTURE generally produces sets of rules that are more accurate than these other systems, often creating smaller sets of rules and using less training time.

    ML ID: 61
  11. Refinement-Based Student Modeling and Automated Bug Library Construction
    [Details] [PDF]
    Paul Baffes and Raymond Mooney
    Journal of Artificial Intelligence in Education, 7(1):75-116, 1996.
    A critical component of model-based intelligent tutoring systems is a mechanism for capturing the conceptual state of the student, which enables the system to tailor its feedback to suit individual strengths and weaknesses. To be useful, such a modeling technique must be practical, in the sense that models are easy to construct, and effective, in the sense that using the model actually impacts student learning. This research presents a new student modeling technique which can automatically capture novel student errors using only correct domain knowledge, and can automatically compile trends across multiple student models. This approach has been implemented as a computer program, ASSERT, using a machine learning technique called theory refinement, which is a method for automatically revising a knowledge base to be consistent with a set of examples. Using a knowledge base that correctly defines a domain and examples of a student's behavior in that domain, ASSERT models student errors by collecting any refinements to the correct knowledge base which are necessary to account for the student's behavior. The efficacy of the approach has been demonstrated by evaluating ASSERT using 100 students tested on a classification task covering concepts from an introductory course on the C++ programming language. Students who received feedback based on the models automatically generated by ASSERT performed significantly better on a post test than students who received simple reteaching.
    ML ID: 59
  12. Revising Bayesian Network Parameters Using Backpropagation
    [Details] [PDF]
    Sowmya Ramachandran and Raymond J. Mooney
    In Proceedings of the International Conference on Neural Networks (ICNN-96), Special Session on Knowledge-Based Artificial Neural Networks, 82--87, Washington DC, June 1996.
    The problem of learning Bayesian networks with hidden variables is known to be a hard problem. Even the simpler task of learning just the conditional probabilities on a Bayesian network with hidden variables is hard. In this paper, we present an approach that learns the conditional probabilities on a Bayesian network with hidden variables by transforming it into a multi-layer feedforward neural network (ANN). The conditional probabilities are mapped onto weights in the ANN, which are then learned using standard backpropagation techniques. To avoid the problem of exponentially large ANNs, we focus on Bayesian networks with noisy-or and noisy-and nodes. Experiments on real world classification problems demonstrate the effectiveness of our technique.
    ML ID: 58
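The key computational idea in this line of work is that a noisy-or node's output probability is differentiable in its parameters, so they can be tuned by gradient descent just like network weights. The following is a minimal sketch of that idea for a single noisy-or node with cross-entropy loss; it illustrates the principle but is not the paper's actual ANN mapping or update schedule.

```python
# Gradient-descent parameter learning for one noisy-or node (a sketch).
# q[i] is the probability that cause i alone produces the effect.

def noisy_or(q, x):
    """P(effect = 1 | binary causes x) for a noisy-or node."""
    prod = 1.0
    for qi, xi in zip(q, x):
        prod *= (1.0 - qi) ** xi          # (1 - q_i)^{x_i}
    return 1.0 - prod

def sgd_step(q, x, y, lr=0.5):
    """One cross-entropy gradient step on the q_i (backprop on one unit)."""
    p = noisy_or(q, x)
    dl_dp = (p - y) / max(p * (1.0 - p), 1e-9)        # dLoss/dp
    new_q = []
    for qi, xi in zip(q, x):
        dp_dqi = (1.0 - p) / (1.0 - qi) if xi else 0.0
        qi -= lr * dl_dp * dp_dqi
        new_q.append(min(max(qi, 1e-3), 1.0 - 1e-3))  # keep q_i in (0, 1)
    return new_q

# Fitting a deterministic cause: q should be driven toward 1.
q = [0.5]
for _ in range(20):
    q = sgd_step(q, x=[1], y=1)
```

Because the node factors into per-cause terms, the gradient needs only the product of the other causes' failure probabilities, which is what keeps the transformed network polynomial in size.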
  13. Refinement of Bayesian Networks by Combining Connectionist and Symbolic Techniques
    [Details] [PDF]
    Sowmya Ramachandran
    1995. Unpublished Ph.D. Thesis Proposal.
    Bayesian networks provide a mathematically sound formalism for representing and reasoning with uncertain knowledge, and as such are widely used. However, acquiring and capturing knowledge in this framework is difficult. There is a growing interest in formulating techniques for learning Bayesian networks inductively. While the problem of learning a Bayesian network, given complete data, has been explored in some depth, the problem of learning networks with unobserved causes is still open. In this proposal, we view this problem from the perspective of theory revision and present a novel approach which adapts techniques developed for revising theories in symbolic and connectionist representations. Thus, we assume that the learner is given an initial approximate network (usually obtained from an expert). Our technique inductively revises the network to fit the data better. Our proposed system has two components: one component revises the parameters of a Bayesian network of known structure, and the other component revises the structure of the network. The component for parameter revision maps the given Bayesian network into a multi-layer feedforward neural network, with the parameters mapped to weights in the neural network, and uses standard backpropagation techniques to learn the weights. The structure revision component uses qualitative analysis to suggest revisions to the network when it fails to predict the data accurately. The first component has been implemented and we will present results from experiments on real-world classification problems which show our technique to be effective. We will also discuss our proposed structure revision algorithm, our plans for experiments to evaluate the system, as well as some extensions to the system.
    ML ID: 51
  14. Automated Refinement of First-Order Horn-Clause Domain Theories
    [Details] [PDF]
    Bradley L. Richards and Raymond J. Mooney
    Machine Learning, 19(2):95-131, 1995.
    Knowledge acquisition is a difficult and time-consuming task, and as error-prone as any human activity. The task of automatically improving an existing knowledge base using learning methods is addressed by a new class of systems performing theory refinement. Until recently, such systems were limited to propositional theories. This paper presents a system, FORTE (First-Order Revision of Theories from Examples), for refining first-order Horn-clause theories. Moving to a first-order representation opens many new problem areas, such as logic program debugging and qualitative modelling, that are beyond the reach of propositional systems. FORTE uses a hill-climbing approach to revise theories. It identifies possible errors in the theory and calls on a library of operators to develop possible revisions. The best revision is implemented, and the process repeats until no further revisions are possible. Operators are drawn from a variety of sources, including propositional theory refinement, first-order induction, and inverse resolution. FORTE has been tested in several domains including logic programming and qualitative modelling.
    ML ID: 43
  15. A Preliminary PAC Analysis of Theory Revision
    [Details] [PDF]
    Raymond J. Mooney
    In T. Petsche and S. Hanson and Jude W. Shavlik, editors, Computational Learning Theory and Natural Learning Systems, Vol. 3, 43-53, Cambridge, MA, 1995. MIT Press.
    This paper presents a preliminary analysis of the sample complexity of theory revision within the framework of PAC (Probably Approximately Correct) learnability theory. By formalizing the notion that the initial theory is "close" to the correct theory, we show that the sample complexity of an optimal propositional Horn-clause theory revision algorithm is $O\left((\ln(1/\delta) + d \ln(s_0 + d + n))/\epsilon\right)$, where $d$ is the syntactic distance between the initial and correct theories, $s_0$ is the size of the initial theory, $n$ is the number of observable features, and $\epsilon$ and $\delta$ are the standard PAC error and probability bounds. The paper also discusses the problems raised by the computational complexity of theory revision.
    ML ID: 41
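Plugging illustrative numbers into the sample-complexity bound makes its dependence on the syntactic distance $d$ concrete. Note the $O(\cdot)$ hides constant factors, so only relative comparisons between the computed values are meaningful; the values of $d$, $s_0$, and $n$ below are arbitrary.

```python
import math

def revision_sample_bound(d, s0, n, eps=0.1, delta=0.05):
    """The quantity inside the O(.) of the theory-revision bound:
    (ln(1/delta) + d * ln(s0 + d + n)) / eps, constants omitted."""
    return (math.log(1 / delta) + d * math.log(s0 + d + n)) / eps

# An initial theory close to the target (small d) needs far fewer examples
# than one requiring many revisions, all else being equal:
close = revision_sample_bound(d=2, s0=50, n=20)
far = revision_sample_bound(d=40, s0=50, n=20)
```

This is the formal version of the empirical claim made throughout these papers: starting from an approximately correct theory reduces the data needed to reach a given accuracy.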
  16. Automatic Student Modeling and Bug Library Construction using Theory Refinement
    [Details] [PDF]
    Paul T. Baffes
    PhD Thesis, Department of Computer Sciences, The University of Texas at Austin, Austin, TX, 1994.
    The history of computers in education can be characterized by a continuing effort to construct intelligent tutorial programs which can adapt to the individual needs of a student in a one-on-one setting. A critical component of these intelligent tutorials is a mechanism for modeling the conceptual state of the student so that the system is able to tailor its feedback to suit individual strengths and weaknesses. The primary contribution of this research is a new student modeling technique which can automatically capture novel student errors using only correct domain knowledge, and can automatically compile trends across multiple student models into bug libraries. This approach has been implemented as a computer program, ASSERT, using a machine learning technique called theory refinement which is a method for automatically revising a knowledge base to be consistent with a set of examples. Using a knowledge base that correctly defines a domain and examples of a student's behavior in that domain, ASSERT models student errors by collecting any refinements to the correct knowledge base which are necessary to account for the student's behavior. The efficacy of the approach has been demonstrated by evaluating ASSERT using 100 students tested on a classification task using concepts from an introductory course on the C++ programming language. Students who received feedback based on the models automatically generated by ASSERT performed significantly better on a post test than students who received simple reteaching.
    ML ID: 40
  17. Comparing Methods For Refining Certainty Factor Rule-Bases
    [Details] [PDF]
    J. Jeffrey Mahoney and Raymond J. Mooney
    In Proceedings of the Eleventh International Workshop on Machine Learning (ML-94), 173--180, Rutgers, NJ, July 1994.
    This paper compares two methods for refining uncertain knowledge bases using propositional certainty-factor rules. The first method, implemented in the RAPTURE system, employs neural-network training to refine the certainties of existing rules but uses a symbolic technique to add new rules. The second method, based on the one used in the KBANN system, initially adds a complete set of potential new rules with very low certainty and allows neural-network training to filter and adjust these rules. Experimental results indicate that the former method results in significantly faster training and produces much simpler refined rule bases with slightly greater accuracy.
    ML ID: 37
  18. Modifying Network Architectures For Certainty-Factor Rule-Base Revision
    [Details] [PDF]
    J. Jeffrey Mahoney and Raymond J. Mooney
    In Proceedings of the International Symposium on Integrating Knowledge and Neural Heuristics (ISIKNH-94), 75--85, Pensacola, FL, May 1994.
    This paper describes RAPTURE --- a system for revising probabilistic rule bases that converts symbolic rules into a connectionist network, which is then trained via connectionist techniques. It uses a modified version of backpropagation to refine the certainty factors of the rule base, and uses ID3's information-gain heuristic (Quinlan) to add new rules. Work is currently under way for finding improved techniques for modifying network architectures that include adding hidden units using the UPSTART algorithm (Frean). A case is made via comparison with fully connected connectionist techniques for keeping the rule base as close to the original as possible, adding new input units only as needed.
    ML ID: 33
  19. A Multistrategy Approach to Theory Refinement
    [Details] [PDF]
    Raymond J. Mooney and Dirk Ourston
    In Ryszard S. Michalski and G. Teccuci, editors, Machine Learning: A Multistrategy Approach, Vol. IV, 141-164, San Mateo, CA, 1994. Morgan Kaufmann.
    This chapter describes a multistrategy system that employs independent modules for deductive, abductive, and inductive reasoning to revise an arbitrarily incorrect propositional Horn-clause domain theory to fit a set of preclassified training instances. By combining such diverse methods, EITHER is able to handle a wider range of imperfect theories than other theory revision systems while guaranteeing that the revised theory will be consistent with the training data. EITHER has successfully revised two actual expert theories, one in molecular biology and one in plant pathology. The results confirm the hypothesis that using a multistrategy system to learn from both theory and data gives better results than using either theory or data alone.
    ML ID: 32
  20. Theory Refinement Combining Analytical and Empirical Methods
    [Details] [PDF]
    Dirk Ourston and Raymond J. Mooney
    Artificial Intelligence:311-344, 1994.
    This article describes a comprehensive approach to automatic theory revision. Given an imperfect theory, the approach combines explanation attempts for incorrectly classified examples in order to identify the failing portions of the theory. For each theory fault, correlated subsets of the examples are used to inductively generate a correction. Because the corrections are focused, they tend to preserve the structure of the original theory. Because the system starts with an approximate domain theory, in general fewer training examples are required to attain a given level of performance (classification accuracy) compared to a purely empirical system. The approach applies to classification systems employing a propositional Horn-clause theory. The system has been tested in a variety of application domains, and results are presented for problems in the domains of molecular biology and plant disease diagnosis.
    ML ID: 31
  21. Extending Theory Refinement to M-of-N Rules
    [Details] [PDF]
    Paul T. Baffes and Raymond J. Mooney
    Informatica, 17:387-397, 1993.
    In recent years, machine learning research has started addressing a problem known as theory refinement. The goal of a theory refinement learner is to modify an incomplete or incorrect rule base, representing a domain theory, to make it consistent with a set of input training examples. This paper presents a major revision of the EITHER propositional theory refinement system. Two issues are discussed. First, we show how run time efficiency can be greatly improved by changing from an exhaustive scheme for computing repairs to an iterative greedy method. Second, we show how to extend EITHER to refine M-of-N rules. The resulting algorithm, NEITHER (New EITHER), is more than an order of magnitude faster and produces significantly more accurate results with theories that fit the M-of-N format. To demonstrate the advantages of NEITHER, we present experimental results from two real-world domains.
    ML ID: 29
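An M-of-N rule fires when at least M of its N antecedents hold, which lets a single rule cover what would otherwise take many strict Horn clauses. A sketch of the evaluation semantics (the antecedent names below are hypothetical, loosely styled after the DNA promoter domain, not taken from the actual theory):

```python
def m_of_n(m, antecedents, facts):
    """An M-of-N rule fires when at least m of its n antecedents are true."""
    return sum(1 for a in antecedents if facts.get(a, False)) >= m

# Hypothetical promoter-style rule: fire if at least 2 of 3 conditions hold.
ants = ["contact", "conformation", "minus_35"]
facts = {"contact": True, "conformation": False, "minus_35": True}
```

Refining M against the data, rather than adding or deleting whole antecedents, is what gives NEITHER its compact revisions on theories of this form.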
  22. Symbolic Revision of Theories With M-of-N Rules
    [Details] [PDF]
    Paul T. Baffes and Raymond J. Mooney
    In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence (IJCAI-93), 1135-1140, Chambery, France, August 1993.
    This paper presents a major revision of the EITHER propositional theory refinement system. Two issues are discussed. First, we show how run time efficiency can be greatly improved by changing from an exhaustive scheme for computing repairs to an iterative greedy method. Second, we show how to extend EITHER to refine M-of-N rules. The resulting algorithm, NEITHER (New EITHER), is more than an order of magnitude faster and produces significantly more accurate results with theories that fit the M-of-N format. To demonstrate the advantages of NEITHER, we present preliminary experimental results comparing it to EITHER and various other systems on refining the DNA promoter domain theory.
    ML ID: 26
  23. Learning to Model Students: Using Theory Refinement to Detect Misconceptions
    [Details] [PDF]
    Paul T. Baffes
    June 1993. Ph.D. proposal, Department of Computer Sciences, University of Texas at Austin.
    A new student modeling system called ASSERT is described which uses domain independent learning algorithms to model unique student errors and to automatically construct bug libraries. ASSERT consists of two learning phases. The first is an application of theory refinement techniques for constructing student models from a correct theory of the domain being tutored. The second learning cycle automatically constructs the bug library by extracting common refinements from multiple student models which are then used to bias future modeling efforts. Initial experimental data will be presented which suggests that ASSERT is a more effective modeling system than other induction techniques previously explored for student modeling, and that the automatic bug library construction significantly enhances subsequent modeling efforts.
    ML ID: 24
  24. Combining Connectionist and Symbolic Learning to Refine Certainty-Factor Rule-Bases
    [Details] [PDF]
    J. Jeffrey Mahoney and Raymond J. Mooney
    Connection Science:339-364, 1993.
    This paper describes Rapture --- a system for revising probabilistic knowledge bases that combines connectionist and symbolic learning methods. Rapture uses a modified version of backpropagation to refine the certainty factors of a Mycin-style rule base and it uses ID3's information gain heuristic to add new rules. Results on refining three actual expert knowledge bases demonstrate that this combined approach generally performs better than previous methods.
    ML ID: 23
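For context on the representation Rapture revises: a Mycin-style rule base attaches a certainty factor in [-1, 1] to each rule, and evidence from multiple rules for the same conclusion is pooled with the standard Mycin combination function. The sketch below is that textbook combination rule, shown to illustrate the representation; it is not Rapture's revision code.

```python
def combine_cf(cf1, cf2):
    """Standard MYCIN combination of two certainty factors in [-1, 1]."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)          # both supportive
    if cf1 <= 0 and cf2 <= 0:
        return cf1 + cf2 * (1 + cf1)          # both disconfirming
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))  # conflicting
```

For example, two rules supporting the same conclusion with certainty factors 0.6 and 0.5 combine to 0.8. Because this function is differentiable almost everywhere in the certainty factors, it is amenable to the backpropagation-style tuning that Rapture performs.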
  25. Integrating Theory and Data in Category Learning
    [Details] [PDF]
    Raymond J. Mooney
    In G. V. Nakamura and D. L. Medin and R. Taraban, editors, Categorization by Humans and Machines, 189-218, 1993.
    Recent results in both machine learning and cognitive psychology demonstrate that effective category learning involves an integration of theory and data. First, theories can bias induction, affecting what category definitions are extracted from a set of examples. Second, conflicting data can cause theories to be revised. Third, theories can alter the representation of data through feature formation. This chapter reviews two machine learning systems that attempt to integrate theory and data in one or more of these ways. IOU uses a domain theory to acquire part of a concept definition and to focus induction on the unexplained aspects of the data. EITHER uses data to revise an imperfect theory and uses theory to add abstract features to the data. Recent psychological experiments reveal that these machine learning systems exhibit several important aspects of human category learning. Specifically, IOU has been used to successfully model some recent experimental results on the effect of functional knowledge on category learning.
    ML ID: 22
  26. Induction Over the Unexplained: Using Overly-General Domain Theories to Aid Concept Learning
    [Details] [PDF]
    Raymond J. Mooney
    Machine Learning, 10:79-110, 1993.
    This paper describes and evaluates an approach to combining empirical and explanation-based learning called Induction Over the Unexplained (IOU). IOU is intended for learning concepts that can be partially explained by an overly-general domain theory. An eclectic evaluation of the method is presented which includes results from all three major approaches: empirical, theoretical, and psychological. Empirical results show that IOU is effective at refining overly-general domain theories and that it learns more accurate concepts from fewer examples than a purely empirical approach. The application of theoretical results from PAC learnability theory explains why IOU requires fewer examples. IOU is also shown to be able to model psychological data demonstrating the effect of background knowledge on human learning.
    ML ID: 20
  27. An Operator-Based Approach to First-Order Theory Revision
    [Details] [PDF]
    Bradley Lance Richards
    PhD Thesis, Department of Computer Science, University of Texas at Austin, August 1992.

    Knowledge acquisition is a difficult and time-consuming task, and as error-prone as any human activity. Thus, knowledge bases must be maintained, as errors and omissions are discovered. To address this task, recent learning systems have combined inductive and explanation-based techniques to produce a new class of systems performing theory revision. When errors are discovered in a knowledge base, theory revision allows automatic self-repair, eliminating the need to recall the knowledge engineer and domain expert.

    To date, theory revision systems have been limited to propositional domains. This thesis presents a system, FORTE (First-Order Revision of Theories from Examples), that performs theory revision in first-order domains. Moving to a first-order representation creates many new challenges, such as argument selection and recursion. But it also opens many new application areas, such as logic programming and qualitative modelling, that are beyond the reach of propositional systems.

    ML ID: 278
  28. Using Theory Revision to Model Students and Acquire Stereotypical Errors
    [Details] [PDF]
    Paul T. Baffes and Raymond J. Mooney
    In Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society, 617-622, Bloomington, IN, 1992.
    Student modeling has been identified as an important component to the long term development of Intelligent Computer-Aided Instruction (ICAI) systems. Two basic approaches have evolved to model student misconceptions. One uses a static, predefined library of user bugs which contains the misconceptions modeled by the system. The other uses induction to learn student misconceptions from scratch. Here, we present a third approach that uses a machine learning technique called theory revision. Using theory revision allows the system to automatically construct a bug library for use in modeling while retaining the flexibility to address novel errors.
    ML ID: 16
  29. Combining Symbolic and Neural Learning to Revise Probabilistic Theories
    [Details] [PDF]
    J. Jeffrey Mahoney and Raymond J. Mooney
    In Proceedings of the ML92 Workshop on Integrated Learning in Real Domains, Aberdeen, Scotland, July 1992.
    This paper describes RAPTURE --- a system for revising probabilistic theories that combines symbolic and neural-network learning methods. RAPTURE uses a modified version of backpropagation to refine the certainty factors of a Mycin-style rule-base and it uses ID3's information gain heuristic to add new rules. Results on two real-world domains demonstrate that this combined approach performs as well or better than previous methods.
    ML ID: 14
  30. Automated Debugging of Logic Programs via Theory Revision
    [Details] [PDF]
    Raymond J. Mooney and Bradley L. Richards
    In Proceedings of the Second International Workshop on Inductive Logic Programming (ILP-92), Tokyo, Japan, 1992.
    This paper presents results on using a theory revision system to automatically debug logic programs. FORTE is a recently developed system for revising function-free Horn-clause theories. Given a theory and a set of training examples, it performs a hill-climbing search in an attempt to minimally modify the theory to correctly classify all of the examples. FORTE makes use of methods from propositional theory revision, Horn-clause induction (FOIL), and inverse resolution. The system has been successfully used to debug logic programs written by undergraduate students for a programming languages course.
    ML ID: 11
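The revision loop described in the abstract above can be sketched as a greedy hill-climbing search: repeatedly propose candidate revisions and keep the one that most improves accuracy, stopping when no candidate helps. The theory representation and the `propose_revisions` operator below are toy stand-ins for illustration only, not FORTE's actual Horn-clause revision operators.

```python
def accuracy(theory, examples):
    """Fraction of examples whose label matches the first firing rule."""
    correct = 0
    for ex in examples:
        pred = next((lbl for feat, lbl in theory if feat in ex["features"]), None)
        correct += pred == ex["label"]
    return correct / len(examples)

def propose_revisions(theory, examples):
    """Toy operator set: add one rule per (feature, label) seen in the data."""
    for ex in examples:
        for feat in ex["features"]:
            yield theory + [(feat, ex["label"])]

def hill_climb(theory, examples):
    """Greedily apply the single best revision until accuracy stops improving."""
    best, best_acc = theory, accuracy(theory, examples)
    while True:
        cand = max(propose_revisions(best, examples),
                   key=lambda t: accuracy(t, examples))
        cand_acc = accuracy(cand, examples)
        if cand_acc <= best_acc:
            return best
        best, best_acc = cand, cand_acc

examples = [
    {"features": ["red", "round"], "label": "apple"},
    {"features": ["yellow", "long"], "label": "banana"},
]
revised = hill_climb([], examples)  # adds rules until both examples classify correctly
```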
  31. Batch versus Incremental Theory Refinement
    [Details] [PDF]
    Raymond J. Mooney
    In Proceedings of the 1992 AAAI Spring Symposium on Knowledge Assimilation, Stanford, CA, March 1992.
    Most existing theory refinement systems are not incremental. However, any theory refinement system whose input and output theories are compatible can be used to incrementally assimilate data into an evolving theory. This is done by continually feeding its revised theory back in as its input theory. An incremental batch approach, in which the system assimilates a batch of examples at each step, seems most appropriate for existing theory revision systems. Experimental results with the EITHER theory refinement system demonstrate that this approach frequently increases efficiency without significantly decreasing the accuracy or the simplicity of the resulting theory. However, if the system produces bad initial changes to the theory based on only a small amount of data, these bad revisions can ``snowball'' and result in an overall decrease in performance.
    ML ID: 9
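The incremental batch scheme in the abstract above amounts to a simple loop: revise the theory on one batch, then feed the revised theory back in as the input theory for the next batch. The sketch below illustrates that loop with a toy propositional "theory" (a list of feature-to-label rules) and a hypothetical `revise` step; neither is EITHER's actual representation or revision procedure.

```python
def classify(theory, example):
    """Return the label predicted by the first matching rule, else None."""
    for feature, label in theory:
        if feature in example["features"]:
            return label
    return None

def revise(theory, batch):
    """Toy revision: add a rule for each misclassified example."""
    revised = list(theory)
    for ex in batch:
        if classify(revised, ex) != ex["label"]:
            # Use the example's first feature as a new (overly specific) rule.
            revised.append((ex["features"][0], ex["label"]))
    return revised

def incremental_batch_refinement(theory, examples, batch_size):
    """Assimilate examples a batch at a time, reusing the revised theory."""
    for i in range(0, len(examples), batch_size):
        theory = revise(theory, examples[i:i + batch_size])
    return theory

examples = [
    {"features": ["red", "round"], "label": "apple"},
    {"features": ["yellow", "long"], "label": "banana"},
]
final = incremental_batch_refinement([], examples, batch_size=1)
```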
  32. Using Explanation-Based and Empirical Methods in Theory Revision
    [Details] [PDF]
    Dirk Ourston
    PhD Thesis, Department of Computer Science, University of Texas at Austin, 1991.

    The knowledge acquisition problem is a continuing problem in expert system development. The knowledge base (domain theory) initially formulated by the expert is usually only an approximation to the correct theory for the application domain. This initial knowledge base must be refined (usually manually) as problems are discovered. This research addresses the knowledge base refinement problem for classification tasks. The research provides an automatic method for correcting a domain theory in the light of incorrect performance on a set of training examples. The method uses attempted explanations to focus the correction on the failing part of the knowledge base. It then uses induction to supply a correction to the knowledge base that will render it consistent with the training examples.

    Using this technique, it is possible to correct overly general and overly specific theories, theories with multiple faults at various levels in the theory hierarchy, and theories involving multiple concepts. Methods have been developed for making corrections even in the presence of noisy data. Theoretical justification for the method is given in the form of convergence results that predict that the method will eventually converge to a hypothesis that is within a small error of the correct hypothesis, given sufficient examples. Because the technique currently relies on theorem proving for much of the analysis, it is quite computationally expensive, and heuristic methods for reducing the computational burden have been implemented.

    The system developed as part of the research is called EITHER (Explanation-based Inductive THeory Extension and Revision). EITHER uses propositional Horn clause logic as its knowledge representation, with examples expressed as attribute-value lists. The system has been tested in a variety of domains including revising a theory for the identification of promoters in DNA sequences and a theory for soybean disease diagnosis, where it has been shown to outperform a purely inductive approach.

    ML ID: 277
  33. First-Order Theory Revision
    [Details] [PDF]
    Bradley L. Richards and Raymond J. Mooney
    In Proceedings of the Eighth International Machine Learning Workshop, 447-451, Evanston, IL, June 1991.
    Recent learning systems have combined explanation-based and inductive learning techniques to revise propositional domain theories (e.g. EITHER, RTLS, KBANN). Inductive systems working in first order logic have also been developed (e.g. CIGOL, FOIL, FOCL). This paper presents a theory revision system, Forte, that merges these two developments. Forte provides theory revision capabilities similar to those of the propositional systems, but works with domain theories stated in first-order logic.
    ML ID: 256
  34. Improving Shared Rules in Multiple Category Domain Theories
    [Details] [PDF]
    Dirk Ourston and Raymond J. Mooney
    In Proceedings of the Eighth International Workshop on Machine Learning, 534-538, Evanston, IL, June 1991.
    This paper presents an approach to improving the classification performance of a multiple category theory by correcting intermediate rules which are shared among the categories. Using this technique, the performance of a theory in one category can be improved through training in an entirely different category. Examples of the technique are presented and experimental results are given.
    ML ID: 6
  35. Constructive Induction in Theory Refinement
    [Details] [PDF]
    Raymond J. Mooney and Dirk Ourston
    In Proceedings of the Eighth International Workshop on Machine Learning, 178-182, Evanston, IL, June 1991.
    This paper presents constructive induction techniques recently added to the EITHER theory refinement system. These additions allow EITHER to handle arbitrary gaps at the ``top,'' ``middle,'' and/or ``bottom'' of an incomplete domain theory. Intermediate concept utilization employs existing rules in the theory to derive higher-level features for use in induction. Intermediate concept creation employs inverse resolution to introduce new intermediate concepts in order to fill gaps in a theory that span multiple levels. These revisions allow EITHER to make use of imperfect domain theories in the ways typical of previous work in both constructive induction and theory refinement. As a result, EITHER is able to handle a wider range of theory imperfections than does any other existing theory refinement system.
    ML ID: 5
  36. Theory Refinement with Noisy Data
    [Details] [PDF]
    Raymond J. Mooney and Dirk Ourston
    Technical Report AI91-153, Artificial Intelligence Laboratory, University of Texas, Austin, TX, March 1991.
    This paper presents a method for revising an approximate domain theory based on noisy data. The basic idea is to avoid making changes to the theory that account for only a small amount of data. This method is implemented in the EITHER propositional Horn-clause theory revision system. The paper presents empirical results on artificially corrupted data to show that this method successfully prevents over-fitting. In other words, when the data is noisy, performance on novel test data is considerably better than when the theory is revised to completely fit the data. When the data is not noisy, noise processing causes no significant degradation in performance. Finally, noise processing increases efficiency and decreases the complexity of the resulting theory.
    ML ID: 4
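The noise-handling idea in the abstract above (avoid changes that account for only a small amount of data) can be sketched as a support threshold on candidate revisions. The `covers` predicate and `min_support` parameter below are hypothetical stand-ins, not EITHER's actual criterion.

```python
def filter_revisions(candidates, examples, covers, min_support=3):
    """Keep only candidate revisions that account for at least
    min_support examples; poorly supported (possibly noise-driven)
    revisions are discarded."""
    kept = []
    for rev in candidates:
        support = sum(1 for ex in examples if covers(rev, ex))
        if support >= min_support:
            kept.append(rev)
    return kept

# Toy usage: revision "a" is supported by three examples, "b" by only one.
examples = [{"f": "a"}, {"f": "a"}, {"f": "a"}, {"f": "b"}]
covers = lambda rev, ex: ex["f"] == rev
surviving = filter_revisions(["a", "b"], examples, covers, min_support=3)
```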
  37. Changing the Rules: A Comprehensive Approach to Theory Refinement
    [Details] [PDF]
    D. Ourston and Raymond J. Mooney
    In Proceedings of the Eighth National Conference on Artificial Intelligence (AAAI-90), 815-820, Boston, MA, July 1990.
    This paper presents a comprehensive approach to automatic theory refinement. In contrast to other systems, the approach is capable of modifying a theory which contains multiple faults and faults which occur at intermediate points in the theory. The approach uses explanations to focus the corrections to the theory, with the corrections being supplied by an inductive component. In this way, an attempt is made to preserve the structure of the original theory as much as possible. Because the approach begins with an approximate theory, learning an accurate theory takes fewer examples than a purely inductive system. The approach has application in expert system development, where an initial, approximate theory must be refined. The approach also applies at any point in the expert system lifecycle when the expert system generates incorrect results. The approach has been applied to the domain of molecular biology and shows significantly better results than a purely inductive learner.
    ML ID: 3