Department of Computer Science

Machine Learning Research Group

University of Texas at Austin Artificial Intelligence Lab

Publications: Neural-Symbolic Learning

Neural networks and symbolic learning techniques can be seen as operating at different levels of abstraction. Our work focuses on understanding the differences between their capabilities and on combining their strengths.
  1. Combining Symbolic and Connectionist Learning Methods to Refine Certainty-Factor Rule-Bases
    J. Jeffrey Mahoney
    PhD Thesis, Department of Computer Sciences, University of Texas at Austin, May 1996. 113 pages.
    This research describes the system RAPTURE, which is designed to revise rule bases expressed in certainty-factor format. Recent studies have shown that learning is facilitated when biased with domain-specific expertise, and have also shown that many real-world domains require some form of probabilistic or uncertain reasoning in order to successfully represent target concepts. RAPTURE was designed to take advantage of both of these results.

    Beginning with a set of certainty-factor rules, along with accurately labelled training examples, RAPTURE makes use of both symbolic and connectionist learning techniques for revising the rules, so that they correctly classify all of the training examples. A modified version of backpropagation is used to adjust the certainty factors of the rules, ID3's information-gain heuristic is used to add new rules, and the Upstart algorithm is used to create new hidden terms in the rule base.

    Results on refining four real-world rule bases are presented that demonstrate the effectiveness of this combined approach. Two of these rule bases were designed to identify particular areas in strands of DNA, one is for identifying infectious diseases, and the fourth attempts to diagnose soybean diseases. The results of RAPTURE are compared with those of backpropagation, C4.5, KBANN, and other learning systems. RAPTURE generally produces sets of rules that are more accurate than these other systems, often creating smaller sets of rules and using less training time.

    ML ID: 61
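    The certainty-factor calculus that RAPTURE revises is compact enough to sketch. The Python below is a minimal illustration of the standard MYCIN-style calculus the thesis builds on, not RAPTURE itself; the rule strengths and evidence values are invented for the example.

    ```python
    def combine_cf(x, y):
        """MYCIN-style parallel combination of two certainty factors in [-1, 1]."""
        if x >= 0 and y >= 0:
            return x + y - x * y
        if x <= 0 and y <= 0:
            return x + y + x * y
        return (x + y) / (1 - min(abs(x), abs(y)))

    def rule_cf(strength, antecedent_cfs):
        """Contribution of one rule: its certainty factor scaled by the weakest
        antecedent (conjunction is min in the CF calculus), floored at zero."""
        return strength * max(0.0, min(antecedent_cfs))

    # Hypothetical rule base: two rules concluding the same hypothesis.
    c1 = rule_cf(0.8, [0.9, 0.7])   # 0.8 * 0.7 = 0.56
    c2 = rule_cf(0.5, [0.6])        # 0.5 * 0.6 = 0.30
    print(combine_cf(c1, c2))       # 0.56 + 0.30 - 0.168 = 0.692
    ```

    Because the combination is a smooth function of the rule strengths, gradient-based revision of the certainty factors (the "modified backpropagation" of the thesis) is possible; see the sketch after entry 6 below.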
  2. Revising Bayesian Network Parameters Using Backpropagation
    Sowmya Ramachandran and Raymond J. Mooney
    In Proceedings of the International Conference on Neural Networks (ICNN-96), Special Session on Knowledge-Based Artificial Neural Networks, 82--87, Washington DC, June 1996.
    The problem of learning Bayesian networks with hidden variables is known to be a hard problem. Even the simpler task of learning just the conditional probabilities on a Bayesian network with hidden variables is hard. In this paper, we present an approach that learns the conditional probabilities on a Bayesian network with hidden variables by transforming it into a multi-layer feedforward neural network (ANN). The conditional probabilities are mapped onto weights in the ANN, which are then learned using standard backpropagation techniques. To avoid the problem of exponentially large ANNs, we focus on Bayesian networks with noisy-or and noisy-and nodes. Experiments on real-world classification problems demonstrate the effectiveness of our technique.
    ML ID: 58
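    A minimal sketch of the idea for a single noisy-or node, assuming a squared-error loss (the paper's actual loss and training details are in the PDF): the node's output is differentiable in its conditional probabilities, so an analytic gradient step plays the role of backpropagation. The clipping bounds are arbitrary and assume the probabilities start strictly inside (0, 1).

    ```python
    def noisy_or(p, x):
        """P(Y=1 | x) for a noisy-or node: each active parent i independently
        fails to cause Y with probability 1 - p[i]."""
        q = 1.0
        for pi, xi in zip(p, x):
            if xi:
                q *= 1.0 - pi
        return 1.0 - q

    def sgd_step(p, x, y, lr=0.1):
        """One squared-error gradient step on the conditional probabilities."""
        y_hat = noisy_or(p, x)
        q = 1.0 - y_hat              # product of the surviving failure terms
        err = y_hat - y
        new_p = []
        for pi, xi in zip(p, x):
            grad = 2.0 * err * q / (1.0 - pi) if xi else 0.0
            new_p.append(min(0.99, max(0.01, pi - lr * grad)))
        return new_p
    ```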
  3. Refinement of Bayesian Networks by Combining Connectionist and Symbolic Techniques
    Sowmya Ramachandran
    1995. Unpublished Ph.D. Thesis Proposal.
    Bayesian networks provide a mathematically sound formalism for representing and reasoning with uncertain knowledge and are, as such, widely used. However, acquiring and capturing knowledge in this framework is difficult. There is a growing interest in formulating techniques for learning Bayesian networks inductively. While the problem of learning a Bayesian network, given complete data, has been explored in some depth, the problem of learning networks with unobserved causes is still open. In this proposal, we view this problem from the perspective of theory revision and present a novel approach which adapts techniques developed for revising theories in symbolic and connectionist representations. Thus, we assume that the learner is given an initial approximate network (usually obtained from an expert). Our technique inductively revises the network to fit the data better. Our proposed system has two components: one component revises the parameters of a Bayesian network of known structure, and the other component revises the structure of the network. The component for parameter revision maps the given Bayesian network into a multi-layer feedforward neural network, with the parameters mapped to weights in the neural network, and uses standard backpropagation techniques to learn the weights. The structure revision component uses qualitative analysis to suggest revisions to the network when it fails to predict the data accurately. The first component has been implemented, and we will present results from experiments on real-world classification problems which show our technique to be effective. We will also discuss our proposed structure revision algorithm, our plans for experiments to evaluate the system, as well as some extensions to the system.
    ML ID: 51
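    The proposal does not spell out its parameter-to-weight mapping, but for noisy-or nodes one standard construction (an assumption here, not a quote from the proposal) makes the correspondence exact: since 1 - prod_i (1 - p_i)^{x_i} = 1 - exp(-sum_i w_i x_i) with w_i = -ln(1 - p_i), a noisy-or node is a linear unit followed by the smooth activation f(s) = 1 - exp(-s), to which plain backpropagation applies.

    ```python
    import math

    def p_to_w(p):
        """Noisy-or parameter p in (0, 1) -> nonnegative weight."""
        return -math.log(1.0 - p)

    def w_to_p(w):
        """Inverse: recover the conditional probability from the weight."""
        return 1.0 - math.exp(-w)

    def noisy_or_unit(w, x):
        """The same node as a feedforward unit: weighted sum followed by the
        smooth activation 1 - exp(-s), so standard backpropagation applies."""
        s = sum(wi * xi for wi, xi in zip(w, x))
        return 1.0 - math.exp(-s)

    # Identity check: a single active parent with p = 0.7 fires with prob 0.7.
    print(noisy_or_unit([p_to_w(0.7)], [1]))   # 0.7 (up to rounding)
    ```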
  4. Comparing Methods For Refining Certainty Factor Rule-Bases
    J. Jeffrey Mahoney and Raymond J. Mooney
    In Proceedings of the Eleventh International Workshop on Machine Learning (ML-94), 173--180, Rutgers, NJ, July 1994.
    This paper compares two methods for refining uncertain knowledge bases using propositional certainty-factor rules. The first method, implemented in the RAPTURE system, employs neural-network training to refine the certainties of existing rules but uses a symbolic technique to add new rules. The second method, based on the one used in the KBANN system, initially adds a complete set of potential new rules with very low certainty and allows neural-network training to filter and adjust these rules. Experimental results indicate that the former method results in significantly faster training and produces much simpler refined rule bases with slightly greater accuracy.
    ML ID: 37
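    A toy contrast of the two initialization strategies being compared; the function names, eps, and cf values are hypothetical illustrations, not code from either system.

    ```python
    import random

    def kbann_style_rules(features, n_rules, eps=0.01):
        """KBANN-style: every potential antecedent is present from the start
        at near-zero certainty; training must filter the useless ones away."""
        return [{f: random.uniform(-eps, eps) for f in features}
                for _ in range(n_rules)]

    def rapture_style_rules(n_rules):
        """RAPTURE-style: start from the expert's sparse rule base (empty here)."""
        return [{} for _ in range(n_rules)]

    def rapture_add_antecedent(rules, rule_idx, feature, cf=0.2):
        """Splice in a single new antecedent only when a symbolic heuristic
        (information gain, in RAPTURE) selects it during training."""
        rules[rule_idx][feature] = cf
    ```

    The experimental finding quoted above follows intuitively: the sparse strategy has far fewer weights to train and nothing to prune away afterwards.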
  5. Modifying Network Architectures For Certainty-Factor Rule-Base Revision
    J. Jeffrey Mahoney and Raymond J. Mooney
    In Proceedings of the International Symposium on Integrating Knowledge and Neural Heuristics (ISIKNH-94), 75--85, Pensacola, FL, May 1994.
    This paper describes RAPTURE --- a system for revising probabilistic rule bases that converts symbolic rules into a connectionist network, which is then trained via connectionist techniques. It uses a modified version of backpropagation to refine the certainty factors of the rule base, and uses ID3's information-gain heuristic (Quinlan) to add new rules. Work is currently under way on improved techniques for modifying network architectures, including adding hidden units using the UPSTART algorithm (Frean). A comparison with fully-connected connectionist techniques makes the case for keeping the rule base as close to the original as possible, adding new input units only as needed.
    ML ID: 33
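    For reference, the information-gain heuristic the paper borrows from ID3 is easy to state in code. This is the textbook formulation, with examples represented as dicts mapping feature names to discrete values (a representation chosen here for illustration):

    ```python
    import math
    from collections import Counter

    def entropy(labels):
        """Shannon entropy (in bits) of a list of class labels."""
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def information_gain(examples, labels, feature):
        """ID3's heuristic: the expected drop in label entropy obtained by
        partitioning the examples on one discrete feature."""
        n = len(labels)
        remainder = 0.0
        for v in {ex[feature] for ex in examples}:
            subset = [lab for ex, lab in zip(examples, labels) if ex[feature] == v]
            remainder += len(subset) / n * entropy(subset)
        return entropy(labels) - remainder
    ```

    RAPTURE uses this heuristic to choose which feature to introduce when a new rule is needed; the exact selection procedure is described in the papers.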
  6. Combining Connectionist and Symbolic Learning to Refine Certainty-Factor Rule-Bases
    J. Jeffrey Mahoney and Raymond J. Mooney
    Connection Science, 339-364, 1993.
    This paper describes Rapture --- a system for revising probabilistic knowledge bases that combines connectionist and symbolic learning methods. Rapture uses a modified version of backpropagation to refine the certainty factors of a Mycin-style rule base and it uses ID3's information gain heuristic to add new rules. Results on refining three actual expert knowledge bases demonstrate that this combined approach generally performs better than previous methods.
    ML ID: 23
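    The "modified backpropagation" is easiest to see after one observation: combining nonnegative certainty factors with x + y - x*y is identical to 1 - (1-x)(1-y), which is differentiable. Below is a sketch of one refinement step under that assumption (all contributions nonnegative, squared-error loss; neither restriction comes from the paper, which should be consulted for the actual update):

    ```python
    def forward(strengths, mins):
        """Output certainty: each rule contributes its certainty factor times
        its weakest antecedent; contributions combine as x + y - x*y, which
        for nonnegative values equals 1 - prod(1 - c)."""
        prod = 1.0
        for s, m in zip(strengths, mins):
            prod *= 1.0 - s * m
        return 1.0 - prod

    def refine_step(strengths, mins, target, lr=0.05):
        """One gradient step on squared error with respect to every rule's
        certainty factor, holding the antecedent activations fixed."""
        contribs = [s * m for s, m in zip(strengths, mins)]
        err = forward(strengths, mins) - target
        new = []
        for i, m in enumerate(mins):
            rest = 1.0
            for j, c in enumerate(contribs):
                if j != i:
                    rest *= 1.0 - c          # d out / d c_i = prod of the rest
            grad = 2.0 * err * rest * m
            new.append(min(1.0, max(0.0, strengths[i] - lr * grad)))
        return new
    ```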
  7. Combining Symbolic and Neural Learning to Revise Probabilistic Theories
    J. Jeffrey Mahoney and Raymond J. Mooney
    In Proceedings of the ML92 Workshop on Integrated Learning in Real Domains, Aberdeen, Scotland, July 1992.
    This paper describes RAPTURE --- a system for revising probabilistic theories that combines symbolic and neural-network learning methods. RAPTURE uses a modified version of backpropagation to refine the certainty factors of a Mycin-style rule-base and it uses ID3's information gain heuristic to add new rules. Results on two real-world domains demonstrate that this combined approach performs as well or better than previous methods.
    ML ID: 14
  8. Growing Layers of Perceptrons: Introducing the Extentron Algorithm
    Paul T. Baffes and John M. Zelle
    In Proceedings of the 1992 International Joint Conference on Neural Networks, 392--397, Baltimore, MD, June 1992.
    The ideas presented here are based on two observations of perceptrons: (1) when the perceptron learning algorithm cycles among hyperplanes, the hyperplanes may be compared to select one that gives a best SPLIT of the examples, and (2) it is always possible for the perceptron to build a hyperplane that separates at least one example from all the rest. We describe the Extentron, which grows multi-layer networks capable of distinguishing non-linearly-separable data using the simple perceptron rule for linear threshold units. The resulting algorithm is simple, very fast, scales well to large problems, retains the convergence properties of the perceptron, and can be completely specified using only two parameters. Results are presented comparing the Extentron to other neural network paradigms and to symbolic learning systems.
    ML ID: 12
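    Extentron's base learner is the classic perceptron rule for a linear threshold unit, sketched below. The layer-growing logic the paper adds (keeping the best SPLIT among cycling hyperplanes and feeding each layer's outputs forward as new inputs) is not shown here.

    ```python
    def train_perceptron(X, y, epochs=100):
        """Perceptron rule for a single linear threshold unit: on each mistake,
        move the weights toward a missed positive, away from a false positive."""
        w = [0.0] * len(X[0])
        b = 0.0
        for _ in range(epochs):
            mistakes = 0
            for x, t in zip(X, y):                  # t in {0, 1}
                out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                if out != t:
                    delta = t - out                 # +1 or -1
                    w = [wi + delta * xi for wi, xi in zip(w, x)]
                    b += delta
                    mistakes += 1
            if mistakes == 0:                       # converged: data separated
                break
        return w, b
    ```

    On non-separable data the loop simply runs out of epochs, which is exactly the situation in which Extentron compares the cycling hyperplanes and grows a new layer.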
  9. Symbolic and Neural Learning Algorithms: An Experimental Comparison
    J.W. Shavlik, Raymond J. Mooney and G. Towell
    Machine Learning, 6:111-143, 1991. Reprinted in Readings in Knowledge Acquisition and Learning, Bruce G. Buchanan and David C. Wilkins (eds.), Morgan Kaufmann, San Mateo, CA, 1993.
    Despite the fact that many symbolic and neural network (connectionist) learning algorithms address the same problem of learning from classified examples, very little is known regarding their comparative strengths and weaknesses. Experiments comparing the ID3 symbolic learning algorithm with the perceptron and backpropagation neural learning algorithms have been performed using five large, real-world data sets. Overall, backpropagation performs slightly better than the other two algorithms in terms of classification accuracy on new examples, but takes much longer to train. Experimental results suggest that backpropagation can work significantly better on data sets containing numerical data. Also analyzed empirically are the effects of (1) the amount of training data, (2) imperfect training examples, and (3) the encoding of the desired outputs. Backpropagation occasionally outperforms the other two systems when given relatively small amounts of training data. It is slightly more accurate than ID3 when examples are noisy or incompletely specified. Finally, backpropagation more effectively utilizes a distributed output encoding.
    ML ID: 8
  10. Processing Issues in Comparisons of Symbolic and Connectionist Learning Systems
    Douglas Fisher and Kathleen McKusick and Raymond J. Mooney and Jude W. Shavlik and Geoffrey Towell
    In Proceedings of the Sixth International Workshop on Machine Learning, 169--173, Ithaca, New York, 1989.
    Symbolic and connectionist learning strategies are receiving much attention. Comparative studies should qualify the advantages of systems from each paradigm. However, these systems make differing assumptions along several dimensions, thus complicating the design of 'fair' experimental comparisons. This paper describes our comparative studies of ID3 and back-propagation and suggests experimental dimensions that may be useful in cross-paradigm experimental design.
    ML ID: 275
  11. An Experimental Comparison of Symbolic and Connectionist Learning Algorithms
    Raymond J. Mooney, J.W. Shavlik, G. Towell and A. Gove
    In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence (IJCAI-89), 775-780, Detroit, MI, August 1989. Reprinted in Readings in Machine Learning, Jude W. Shavlik and T. G. Dietterich (eds.), Morgan Kaufmann, San Mateo, CA, 1990.
    Despite the fact that many symbolic and connectionist (neural net) learning algorithms are addressing the same problem of learning from classified examples, very little is known regarding their comparative strengths and weaknesses. This paper presents the results of experiments comparing the ID3 symbolic learning algorithm with the perceptron and back-propagation connectionist learning algorithms on several large real-world data sets. The results show that ID3 and perceptron run significantly faster than does back-propagation, both during learning and during classification of novel examples. However, the probability of correctly classifying new examples is about the same for the three systems. On noisy data sets there is some indication that back-propagation classifies more accurately.
    ML ID: 211