Growing Layers of Perceptrons: Introducing the Extentron Algorithm (1992)
The ideas presented here are based on two observations of perceptrons: (1) when the perceptron learning algorithm cycles among hyperplanes, the hyperplanes may be compared to select one that gives a best SPLIT of the examples, and (2) it is always possible for the perceptron to build a hyperplane that separates at least one example from all the rest. We describe the Extentron, which grows multi-layer networks capable of distinguishing non-linearly-separable data using the simple perceptron rule for linear threshold units. The resulting algorithm is simple, very fast, scales well to large problems, retains the convergence properties of the perceptron, and can be completely specified using only two parameters. Results are presented comparing the Extentron to other neural network paradigms and to symbolic learning systems.
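The abstract only sketches the algorithm, so the following is a minimal Python sketch of the two observations, not the authors' exact procedure. train_perceptron tracks the best-splitting hyperplane while the weights cycle (observation 1), and grow_extentron_layer adds hidden units until the hidden representation becomes linearly separable; random restarts here stand in for the paper's unit-selection step, which is an assumption on my part.

```python
import numpy as np

def train_perceptron(X, y, epochs=200, rng=None):
    """Perceptron rule on bias-augmented inputs. If the weights cycle
    without converging (data not linearly separable), return the
    hyperplane seen so far that best splits the examples (observation 1)."""
    Xa = np.hstack([X, np.ones((len(X), 1))])          # append a bias input
    w = rng.normal(size=Xa.shape[1]) if rng is not None else np.zeros(Xa.shape[1])
    best_w, best_correct = w.copy(), -1
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(Xa, y):
            pred = int(xi @ w > 0)
            if pred != yi:
                w += (yi - pred) * xi                  # simple perceptron update
                errors += 1
        correct = int(np.sum((Xa @ w > 0).astype(int) == y))
        if correct > best_correct:                     # remember the best split
            best_correct, best_w = correct, w.copy()
        if errors == 0:
            return w, True                             # converged: separable
    return best_w, False

def hidden_outputs(X, units):
    """Binary outputs of the hidden units for each example."""
    Xa = np.hstack([X, np.ones((len(X), 1))])
    return np.array([(Xa @ u > 0) for u in units], dtype=float).T

def grow_extentron_layer(X, y, max_units=16, seed=0):
    """Grow hidden units until an output perceptron linearly separates
    the hidden representation. Observation 2 guarantees a hyperplane
    isolating at least one example always exists, so growth can make
    progress; random restarts replace the paper's selection criterion."""
    rng = np.random.default_rng(seed)
    units = []
    for _ in range(max_units):
        w, converged = train_perceptron(X, y, rng=rng)
        if converged:
            return [w], np.array([1.0, -0.5])          # a single unit suffices
        units.append(w)
        out_w, separable = train_perceptron(hidden_outputs(X, units), y)
        if separable:
            return units, out_w                        # network is complete
    return units, out_w
```

A quick usage check on XOR, the classic non-linearly-separable case:

```python
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])
units, out_w = grow_extentron_layer(X, y)
H = np.hstack([hidden_outputs(X, units), np.ones((len(X), 1))])
print((H @ out_w > 0).astype(int))  # ideally [0 1 1 0] once enough units are grown
```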
View:
PDF, PS
Citation:
In Proceedings of the 1992 International Joint Conference on Neural Networks, pp. 392--397, Baltimore, MD, June 1992.
Bibtex:
@inproceedings{baffes-zelle:ijcnn92,
  author    = {Paul Baffes and John M. Zelle},
  title     = {Growing Layers of Perceptrons: Introducing the Extentron Algorithm},
  booktitle = {Proceedings of the 1992 International Joint Conference on Neural Networks},
  pages     = {392--397},
  address   = {Baltimore, MD},
  month     = {June},
  year      = {1992}
}
Paul Baffes, Ph.D. Alumni
John M. Zelle, Ph.D. Alumni, john zelle [at] wartburg edu