Department of Computer Science

Machine Learning Research Group

University of Texas at Austin Artificial Intelligence Lab

Publications: 1989

  1. Processing Issues in Comparisons of Symbolic and Connectionist Learning Systems
    Douglas Fisher, Kathleen McKusick, Raymond J. Mooney, Jude W. Shavlik, and Geoffrey Towell
    In Proceedings of the Sixth International Workshop on Machine Learning, 169--173, Ithaca, New York, 1989.
    Symbolic and connectionist learning strategies are receiving much attention. Comparative studies should qualify the advantages of systems from each paradigm. However, these systems make differing assumptions along several dimensions, thus complicating the design of 'fair' experimental comparisons. This paper describes our comparative studies of ID3 and back-propagation and suggests experimental dimensions that may be useful in cross-paradigm experimental design.
    ML ID: 275
  2. An Experimental Comparison of Symbolic and Connectionist Learning Algorithms
    Raymond J. Mooney, Jude W. Shavlik, Geoffrey Towell, and A. Gove
    In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence (IJCAI-89), 775-780, Detroit, MI, August 1989. Reprinted in Readings in Machine Learning, Jude W. Shavlik and T. G. Dietterich (eds.), Morgan Kaufmann, San Mateo, CA, 1990.
    Despite the fact that many symbolic and connectionist (neural net) learning algorithms are addressing the same problem of learning from classified examples, very little is known regarding their comparative strengths and weaknesses. This paper presents the results of experiments comparing the ID3 symbolic learning algorithm with the perceptron and back-propagation connectionist learning algorithms on several large real-world data sets. The results show that ID3 and perceptron run significantly faster than does back-propagation, both during learning and during classification of novel examples. However, the probability of correctly classifying new examples is about the same for the three systems. On noisy data sets there is some indication that back-propagation classifies more accurately.
    ML ID: 211
  3. The Effect of Rule Use on the Utility of Explanation-Based Learning
    Raymond J. Mooney
    In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence (IJCAI-89), 725-730, Detroit, MI, August 1989. San Francisco, CA: Morgan Kaufmann.
    The utility problem in explanation-based learning concerns the ability of learned rules or plans to actually improve the performance of a problem-solving system. Previous research on this problem has focused on the amount, content, or form of learned information. This paper examines the effect of the use of learned information on performance. Experiments and informal analysis show that unconstrained use of learned rules eventually leads to degraded performance. However, constraining the use of learned rules helps avoid the negative effect of learning and leads to overall performance improvement. Search strategy is also shown to have a substantial effect on the contribution of learning to performance by affecting the manner in which learned rules are used. These effects help explain why previous experiments have obtained a variety of different results concerning the impact of explanation-based learning on performance.
    ML ID: 210
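
The comparison protocol described in entries 1 and 2 amounts to training a symbolic learner and one or more connectionist learners on the same classified examples, then comparing training time and accuracy on held-out data. The sketch below is a rough modern re-creation of that protocol, not the original code or data: an entropy-criterion decision tree from scikit-learn stands in for ID3, and scikit-learn's Perceptron and MLPClassifier stand in for the perceptron and back-propagation learners; the data set, hyperparameters, and timing measure are arbitrary choices made for illustration.

```python
import time

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import Perceptron
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Any labeled data set works; this one ships with scikit-learn.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

learners = {
    # Entropy-criterion tree as a rough stand-in for ID3.
    "decision tree (ID3-style)": DecisionTreeClassifier(criterion="entropy", random_state=0),
    # Inputs are standardized for the connectionist learners.
    "perceptron": make_pipeline(StandardScaler(), Perceptron(max_iter=1000, random_state=0)),
    "back-propagation (MLP)": make_pipeline(
        StandardScaler(), MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)),
}

for name, clf in learners.items():
    start = time.perf_counter()
    clf.fit(X_train, y_train)                           # training-time comparison
    train_time = time.perf_counter() - start
    acc = accuracy_score(y_test, clf.predict(X_test))   # generalization accuracy
    print(f"{name:26s}  train {train_time:6.3f}s  test accuracy {acc:.3f}")
```

On a data set like this the pattern reported in entry 2 is easy to reproduce in spirit: the tree and perceptron train in a fraction of the time the multi-layer network needs, while test accuracies land in the same neighborhood.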
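The utility effect discussed in entry 3 can also be made concrete with a toy example. The sketch below is an invented illustration, not the problem solver or domains used in the paper: the search problem, the macro-rule encoding, and the use of match attempts as a cost proxy are all assumptions made for this sketch. The point it demonstrates is that matching every learned macro-rule at every search node makes rule-match cost grow with both the rule base and the search tree, whereas constraining macro use (here, to the initial state only) pays that cost once.

```python
from collections import deque

# Primitive operators over integer states (a deliberately tiny domain).
PRIMITIVES = {"inc": lambda n: n + 1, "dbl": lambda n: n * 2}


def solve(start, goal, macros, constrained):
    """Breadth-first search from start to goal.

    Returns (plan, macro_match_attempts).  Macro-rules are matched at every
    node when constrained is False, and only at the start state when True.
    """
    rule_matches = 0
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan, rule_matches
        # Learned macro-rules: operator sequences remembered from earlier problems.
        if not constrained or state == start:
            for name, seq in macros.items():
                rule_matches += 1                  # one match attempt per macro per node
                nxt = state
                for op in seq:
                    nxt = PRIMITIVES[op](nxt)
                if nxt <= goal and nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
        # Primitive operators are always available.
        for name, op in PRIMITIVES.items():
            nxt = op(state)
            if nxt <= goal and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [name]))
    return None, rule_matches


# A bloated rule base: many previously learned macros, most of them useless here.
macros = {f"macro{i}": ["inc"] * i + ["dbl"] for i in range(1, 40)}

for constrained in (False, True):
    plan, cost = solve(1, 97, macros, constrained)
    mode = "constrained" if constrained else "unconstrained"
    print(f"{mode:13s} use: plan length {len(plan)}, macro-match attempts {cost}")
```

With 39 stored macros, the unconstrained run pays a match attempt for every macro at every expanded node, while the constrained run pays the same cost exactly once; in a real explanation-based learner the analogous effect is rule-match time swamping the search savings that the learned rules were meant to provide.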