Department of Computer Science

Machine Learning Research Group

University of Texas at Austin Artificial Intelligence Lab

Publications: 1992

  1. An Operator-Based Approach to First-Order Theory Revision
    [Details] [PDF]
    Bradley Lance Richards
    PhD Thesis, Department of Computer Science, University of Texas at Austin, August 1992.

    Knowledge acquisition is a difficult and time-consuming task, and as error-prone as any human activity. Thus, knowledge bases must be maintained, as errors and omissions are discovered. To address this task, recent learning systems have combined inductive and explanation-based techniques to produce a new class of systems performing theory revision. When errors are discovered in a knowledge base, theory revision allows automatic self-repair, eliminating the need to recall the knowledge engineer and domain expert.

    To date, theory revision systems have been limited to propositional domains. This thesis presents a system, FORTE (First-Order Revision of Theories from Examples), that performs theory revision in first-order domains. Moving to a first-order representation creates many new challenges, such as argument selection and recursion. But it also opens many new application areas, such as logic programming and qualitative modelling, that are beyond the reach of propositional systems.

    ML ID: 278
  2. A General Abductive System with Application to Plan Recognition and Diagnosis
    [Details] [PDF]
    Hwee Tou Ng
    PhD Thesis, Department of Computer Sciences, University of Texas at Austin, May 1992. 154 pages.
    A diverse set of intelligent activities, including natural language understanding and scientific theory formation, requires the ability to construct explanations for observed phenomena. In this thesis, we view explanation as abduction, where an abductive explanation is a consistent set of assumptions which, together with background knowledge, logically entails a set of observations.
    To explore the practical feasibility of such a general abductive approach to explanation, we have successfully built a domain-independent system called ACCEL. In our system, knowledge about a variety of domains is uniformly encoded in first-order Horn-clause axioms. A general-purpose abduction algorithm, AAA, efficiently constructs explanations in the various domains by caching partial explanations to avoid redundant work. Empirical results show that caching of partial explanations can achieve more than an order of magnitude speedup in run time. We have applied our abductive system to two general tasks: plan recognition in text understanding, and diagnosis of medical diseases, logic circuits, and dynamic systems. The results indicate that ACCEL is a general-purpose system capable of plan recognition and diagnosis, yet efficient enough to be of practical utility.
    In the plan recognition domain, we defined a novel evaluation criterion, called explanatory coherence, and tested ACCEL on 50 short narrative texts. Empirical results demonstrate that coherence is a better evaluation metric than simplicity in the plan recognition domain, and that our system is sufficiently general to be able to handle similar plan recognition problems not known to the system developer in advance.
    In medical diagnosis, we prove that ACCEL computes the same diagnoses as the GSC model of Reggia, and present empirical results demonstrating the efficiency of ACCEL in diagnosing 50 real-world patient cases using a sizable knowledge base with over six hundred symptom-disease rules.
    ACCEL also realizes model-based diagnosis, which concerns inferring faults from first principles given knowledge about the correct structure and behavior of a system. ACCEL has successfully diagnosed logic circuits (a full adder) and dynamic systems (a proportional temperature controller and the water balance system of the human kidney).
    ML ID: 230
  3. Schema acquisition from a single example
    [Details] [PDF]
    W. Ahn, W. F. Brewer, and Raymond J. Mooney
    Journal of Experimental Psychology: Learning, Memory, and Cognition, 18:391-412, 1992.
    This study compares similarity-based learning (SBL) and explanation-based learning (EBL) approaches to schema acquisition. In SBL approaches, concept formation is based on similarity across multiple examples. However, these approaches seem to be appropriate when the learner cannot apply existing knowledge and when the concepts to be learned are nonexplanatory. EBL approaches assume that a schema can be acquired from even a single example by constructing an explanation of the example using background knowledge, and generalizing the resulting explanation. However, unlike current EBL theories, Experiment 1 showed that significant EBL occurred only when the background information learned during the experiment was actively used by the subjects. Experiment 2 showed the generality of EBL mechanisms across a variety of materials and test procedures.
    ML ID: 212
  4. Abductive Plan Recognition and Diagnosis: A Comprehensive Empirical Evaluation
    [Details] [PDF]
    Hwee Tou Ng and Raymond J. Mooney
    In Proceedings of the Third International Conference on Principles of Knowledge Representation and Reasoning, 499--508, Cambridge, MA, October 1992.
    While it has been realized for quite some time within AI that abduction is a general model of explanation for a variety of tasks, there have been no empirical investigations into the practical feasibility of a general, logic-based abductive approach to explanation. In this paper we present extensive empirical results on applying a general abductive system, ACCEL, to moderately complex problems in plan recognition and diagnosis. In plan recognition, ACCEL has been tested on 50 short narrative texts, inferring characters' plans from actions described in a text. In medical diagnosis, ACCEL has diagnosed 50 real-world patient cases involving brain damage due to stroke (previously addressed by set-covering methods). ACCEL also uses abduction to accomplish model-based diagnosis of logic circuits (a full adder) and continuous dynamic systems (a temperature controller and the water balance system of the human kidney). The results indicate that general-purpose abduction is an effective and efficient mechanism for solving problems in plan recognition and diagnosis.
    ML ID: 19
  5. Speeding-up Logic Programs by Combining EBG and FOIL
    [Details] [PDF]
    John M. Zelle and Raymond J. Mooney
    In Proceedings of the 1992 Machine Learning Workshop on Knowledge Compilation and Speedup Learning, Aberdeen, Scotland, July 1992.
    This paper presents an algorithm that combines traditional EBL techniques and recent developments in inductive logic programming to learn effective clause selection rules for Prolog programs. When these control rules are incorporated into the original program, significant speed-up may be achieved. The algorithm not only produces EBL-like speed-up of problem solvers, but is also capable of automatically transforming some intractable algorithms into ones that run in polynomial time.
    ML ID: 18
  6. Automatic Abduction of Qualitative Models
    [Details] [PDF]
    Bradley L. Richards, Ina Kraan, and Benjamin J. Kuipers
    In Proceedings of the Fifth International Workshop on Qualitative Reasoning about Physical Systems, 295-301, 1992.
    We describe a method of automatically abducing qualitative models from descriptions of behaviors. We generate, from either quantitative or qualitative data, models in the form of qualitative differential equations suitable for use by QSIM. Constraints are generated and filtered both by comparison with the input behaviors and by dimensional analysis. If the user provides complete information on the input behaviors and the dimensions of the input variables, the resulting model is unique, maximally constrained, and guaranteed to reproduce the input behaviors. If the user provides incomplete information, our method will still generate a model which reproduces the input behaviors, but the model may no longer be unique. Incompleteness can take several forms: missing dimensions, values of variables, or entire variables.
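    The dimensional-analysis filter mentioned in the abstract can be illustrated with a small sketch. Dimensions are represented here as exponent vectors, and candidate constraints are kept only if their dimensions are consistent; the variable names, the two-component dimension vectors, and the constraint names are hypothetical and do not reflect QSIM's actual representation.

```python
# Toy dimensional filter for candidate qualitative constraints.
# A dimension is an exponent vector over base units (length, time):
# e.g. velocity = L/T -> (1, -1). (Illustrative encoding only.)

DIMS = {"x": (1, 0), "v": (1, -1), "t": (0, 1)}

def mult_ok(a, b, c):
    """mult(a, b, c) is dimensionally valid iff dim(a) + dim(b) == dim(c)."""
    return tuple(i + j for i, j in zip(DIMS[a], DIMS[b])) == DIMS[c]

def add_ok(a, b, c):
    """add(a, b, c) requires all three variables to share one dimension."""
    return DIMS[a] == DIMS[b] == DIMS[c]
```

    For instance, `mult_ok("v", "t", "x")` holds because velocity times time has the dimension of length, while `add_ok("x", "v", "x")` is rejected; filtering candidates this way prunes the constraint space before behaviors are ever consulted.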
    ML ID: 17
  7. Using Theory Revision to Model Students and Acquire Stereotypical Errors
    [Details] [PDF]
    Paul T. Baffes and Raymond J. Mooney
    In Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society, 617-622, Bloomington, IN, 1992.
    Student modeling has been identified as an important component to the long term development of Intelligent Computer-Aided Instruction (ICAI) systems. Two basic approaches have evolved to model student misconceptions. One uses a static, predefined library of user bugs which contains the misconceptions modeled by the system. The other uses induction to learn student misconceptions from scratch. Here, we present a third approach that uses a machine learning technique called theory revision. Using theory revision allows the system to automatically construct a bug library for use in modeling while retaining the flexibility to address novel errors.
    ML ID: 16
  8. Learning Relations by Pathfinding
    [Details] [PDF]
    Bradley L. Richards and Raymond J. Mooney
    In Proceedings of the Tenth National Conference on Artificial Intelligence (AAAI-92), 50-55, San Jose, CA, July 1992.
    First-order learning systems (e.g., FOIL, FOCL, FORTE) generally rely on hill-climbing heuristics in order to avoid the combinatorial explosion inherent in learning first-order concepts. However, hill-climbing leaves these systems vulnerable to local maxima and local plateaus. We present a method, called relational pathfinding, which has proven highly effective in escaping local maxima and crossing local plateaus. We present our algorithm and provide learning results in two domains: family relationships and qualitative model building.
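    The intuition behind relational pathfinding, searching the graph of ground facts for a chain of relations linking two constants and using that chain to suggest a clause, can be sketched as a breadth-first search. This is an illustrative toy under assumed binary facts, not FORTE's implementation; the `FACTS` list and family-relation names are made up for the example.

```python
from collections import deque

# Sketch of relational pathfinding: find a chain of known relations
# connecting two constants; the chain suggests a candidate clause body.

FACTS = [("parent", "ann", "bob"), ("parent", "bob", "carl")]

def find_path(start, goal):
    """BFS over the fact graph; returns a list of (relation, arg1, arg2)
    edges linking start to goal, or None if no chain exists."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        node, path = frontier.popleft()
        if node == goal:
            return path
        for rel, a, b in FACTS:
            neighbors = ({b} if a == node else set()) | ({a} if b == node else set())
            for nxt in neighbors:
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, path + [(rel, a, b)]))
    return None
```

    On these facts, the path from `ann` to `carl` passes through two `parent` edges, which is exactly the chain that would suggest a `grandparent` clause; escaping a local maximum amounts to finding such a multi-literal chain that no single-literal hill-climbing step would propose.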
    ML ID: 15
  9. Combining Symbolic and Neural Learning to Revise Probabilistic Theories
    [Details] [PDF]
    J. Jeffrey Mahoney and Raymond J. Mooney
    In Proceedings of the ML92 Workshop on Integrated Learning in Real Domains, Aberdeen, Scotland, July 1992.
    This paper describes RAPTURE --- a system for revising probabilistic theories that combines symbolic and neural-network learning methods. RAPTURE uses a modified version of backpropagation to refine the certainty factors of a Mycin-style rule-base and it uses ID3's information gain heuristic to add new rules. Results on two real-world domains demonstrate that this combined approach performs as well or better than previous methods.
    ML ID: 14
  10. A First-Order Horn-Clause Abductive System and Its Use in Plan Recognition and Diagnosis
    [Details] [PDF]
    Hwee Tou Ng and Raymond J. Mooney
    June 1992. Unpublished Technical Note.
    A diverse set of intelligent activities, including natural language understanding and diagnosis, requires the ability to construct explanations for observed phenomena. In this paper, we view explanation as abduction, where an abductive explanation is a consistent set of assumptions which, together with background knowledge, logically entails a set of observations. We have successfully built a domain-independent system, ACCEL, in which knowledge about a variety of domains is uniformly encoded in first-order Horn-clause axioms. A general-purpose abduction algorithm, AAA, efficiently constructs explanations in the various domains by caching partial explanations to avoid redundant work. Empirical results show that caching of partial explanations can achieve more than an order of magnitude speedup in run time. We have applied our abductive system to two general tasks: plan recognition in text understanding, and diagnosis of medical diseases, logic circuits, and dynamic systems. The results indicate that ACCEL is a general-purpose system capable of plan recognition and diagnosis, yet efficient enough to be of practical utility.
    ML ID: 13
  11. Growing Layers of Perceptrons: Introducing the Extentron Algorithm
    [Details] [PDF]
    Paul T. Baffes and John M. Zelle
    In Proceedings of the 1992 International Joint Conference on Neural Networks, 392--397, Baltimore, MD, June 1992.
    The ideas presented here are based on two observations of perceptrons: (1) when the perceptron learning algorithm cycles among hyperplanes, the hyperplanes may be compared to select one that gives a best SPLIT of the examples, and (2) it is always possible for the perceptron to build a hyperplane that separates at least one example from all the rest. We describe the Extentron, which grows multi-layer networks capable of distinguishing non-linearly-separable data using the simple perceptron rule for linear threshold units. The resulting algorithm is simple, very fast, scales well to large problems, retains the convergence properties of the perceptron, and can be completely specified using only two parameters. Results are presented comparing the Extentron to other neural network paradigms and to symbolic learning systems.
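    The building block the abstract refers to, the simple perceptron rule for a linear threshold unit, can be sketched as follows. This is the classic textbook rule, not the Extentron itself; the epoch limit and the list-based weight representation are arbitrary choices for the example.

```python
# Classic perceptron rule for a single linear threshold unit (sketch).
# `examples` is a list of (input_vector, label) pairs with labels +1/-1.

def perceptron(examples, epochs=100):
    n = len(examples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        updated = False
        for x, y in examples:
            # Misclassified (or on the boundary): nudge the hyperplane.
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                updated = True
        if not updated:      # converged: every example strictly separated
            break
    return w, b
```

    On linearly separable data this converges to a separating hyperplane; the Extentron's contribution is to keep applying this rule to grow further layers when the data are not linearly separable.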
    ML ID: 12
  12. Automated Debugging of Logic Programs via Theory Revision
    [Details] [PDF]
    Raymond J. Mooney and Bradley L. Richards
    In Proceedings of the Second International Workshop on Inductive Logic Programming (ILP-92), Tokyo, Japan, 1992.
    This paper presents results on using a theory revision system to automatically debug logic programs. FORTE is a recently developed system for revising function-free Horn-clause theories. Given a theory and a set of training examples, it performs a hill-climbing search in an attempt to minimally modify the theory to correctly classify all of the examples. FORTE makes use of methods from propositional theory revision, Horn-clause induction (FOIL), and inverse resolution. The system has been successfully used to debug logic programs written by undergraduate students for a programming languages course.
    ML ID: 11
  13. Belief Revision in the Context of Abductive Explanation
    [Details] [PDF]
    Siddarth Subramanian
    Technical Report AI92-179, Artificial Intelligence Laboratory, University of Texas, Austin, TX, December 1992.
    This proposal presents an approach to explanation that incorporates the paradigms of belief revision and abduction. We present an algorithm that combines these techniques and a system called BRACE that is a preliminary implementation of this algorithm. We show the applicability of the BRACE approach to a wide range of domains including scientific discovery, device diagnosis and plan recognition. Finally, we describe our proposals for a new implementation, new application domains for our system and extensions to this approach.
    ML ID: 10
  14. Batch versus Incremental Theory Refinement
    [Details] [PDF]
    Raymond J. Mooney
    In Proceedings of the 1992 AAAI Spring Symposium on Knowledge Assimilation, Stanford, CA, March 1992.
    Most existing theory refinement systems are not incremental. However, any theory refinement system whose input and output theories are compatible can be used to incrementally assimilate data into an evolving theory. This is done by continually feeding its revised theory back in as its input theory. An incremental batch approach, in which the system assimilates a batch of examples at each step, seems most appropriate for existing theory revision systems. Experimental results with the EITHER theory refinement system demonstrate that this approach frequently increases efficiency without significantly decreasing the accuracy or the simplicity of the resulting theory. However, if the system produces bad initial changes to the theory based on only a small amount of data, these bad revisions can "snowball" and result in an overall decrease in performance.
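    The incremental batch scheme described above, feeding the revised theory back in as the input theory for the next batch, reduces to a short loop. The `revise` function and `batch_size` parameter here are hypothetical placeholders standing in for a batch refinement system such as EITHER.

```python
# Incremental batch assimilation (sketch): the output theory of one batch
# becomes the input theory for the next. `revise(theory, batch)` stands in
# for any batch theory refinement system with compatible input/output.

def incremental_refine(theory, examples, revise, batch_size=10):
    for i in range(0, len(examples), batch_size):
        batch = examples[i:i + batch_size]
        theory = revise(theory, batch)   # revised theory carried forward
    return theory
```

    The failure mode the abstract notes falls out of this structure: an early `revise` call sees only one small batch, so a bad revision made there is carried into every later call.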
    ML ID: 9