The Predictivity Principle: A Heuristic for Choosing Learning Rules

Our analysis is derived from the following visual predictivity principle, which we postulate as a fundamental principle of neural organization in visual systems: Visual systems represent the world in terms of predictions of its future appearance, and they reorganize themselves to generate better predictions. If the predictivity principle were satisfied (i.e., if a visual system generated perfect predictions of the appearance of its view of the world, down to the last image detail), then clearly we could infer that the visual system possessed an excellent representation, or model, of the actual state of the visual world. Albus [1] described a version of a predictivity principle in which differences between predicted and observed input can lead to updates of an internal world model, so that better predictions can be generated later.

The predictivity principle asserts that visual systems should represent the world in terms of predictions. For instance, whenever an object is detected as moving in a certain direction, its motion should be represented in terms of a prediction that, after a given delay, the object will appear at a specific location farther along its motion trajectory.
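
To make the example concrete (the notation here is ours, not taken from the original text): if an object at retinal position x(t) is observed moving with estimated velocity v, the represented prediction amounts to a commitment to the object's location after a delay Δt,

    x̂(t + Δt) = x(t) + v·Δt,

so the motion percept itself encodes where the object is expected to appear next.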

Marshall [7,8] describes how a neural network model can learn to represent visual motion sequences in terms of prediction signals transmitted along time-delayed lateral excitatory connections. The output of any time-delayed connection can be considered a prediction, since such connections transmit information about past scenes to the neurons processing the present sensory inputs.
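
The following sketch is not Marshall's actual network; it is a minimal illustration, under our own assumptions (a single layer, one fixed delay, rectified-linear units, arbitrary weight initialization), of how time-delayed lateral excitatory connections can carry a prediction: the lateral signal arriving now was computed from the layer's activity one time step ago.

    import numpy as np

    class TimeDelayedLateralLayer:
        def __init__(self, n_neurons, delay=1, seed=0):
            rng = np.random.default_rng(seed)
            # Small nonnegative (excitatory) lateral weights; this initialization is an assumption.
            self.W_lateral = rng.uniform(0.0, 0.1, size=(n_neurons, n_neurons))
            np.fill_diagonal(self.W_lateral, 0.0)            # no self-connections
            # Buffer holding the layer's past activations, one entry per delay step.
            self.history = [np.zeros(n_neurons) for _ in range(delay)]

        def step(self, bottom_up):
            delayed = self.history.pop(0)                    # the layer's activity `delay` steps ago
            prediction = self.W_lateral @ delayed            # lateral signal = prediction of current activity
            activation = np.maximum(0.0, bottom_up + prediction)  # simple rectified combination
            self.history.append(activation)
            return activation, prediction

Each call to step() combines the present bottom-up input with a signal derived entirely from past activity, which is the sense in which the delayed connections "predict" the present scene.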

The predictivity principle further asserts that whenever a prediction fails, a learning rule should be triggered to alter the visual system's world model so that a better prediction would be generated if the same scene were to arise again. For instance, suppose the prediction that a moving object will appear at a certain location fails (e.g., because the object moves behind an occluder). The visual system's motion model should then be altered so that, if the same scene were replayed, it would predict the disappearance of the object more accurately.
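
A minimal sketch of how such a prediction-failure-triggered update could look, assuming the time-delayed lateral layer sketched above and a generic delta rule on the prediction error (an illustration only, not the specific learning rule developed in this paper):

    def update_lateral_weights(W_lateral, delayed_activation, actual_activation,
                               prediction, learning_rate=0.01):
        # Prediction error: positive where activity appeared that was not predicted,
        # negative where activity was predicted but did not appear (e.g., occlusion).
        error = actual_activation - prediction
        # Adjust the connections from the past activity pattern toward the neurons
        # whose prediction was too weak (strengthen) or too strong (weaken).
        W_new = W_lateral + learning_rate * np.outer(error, delayed_activation)
        W_new = np.clip(W_new, 0.0, None)    # keep the lateral connections excitatory
        np.fill_diagonal(W_new, 0.0)
        return W_new

In the occlusion example, the neurons coding the predicted location remain silent, so the error there is negative and the corresponding lateral weights shrink; if the same scene were replayed, the network would more nearly predict the object's disappearance rather than its reappearance.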
