Two results that have emerged from this research suggest how visual systems may learn to represent visual depth information. First, a visual system can learn a nonmetric representation of the depth relations arising from occlusion events. The existence of separate channels for representing modal and amodal information provides a substrate upon which the network can learn associations between the disappearance of moving objects and the presence of occluders.
Second, parallel opponent On and Off channels that represent both modal and amodal stimuli can be learned through the same process. These channels let visual systems encode exceptions to predictive rules: for example, an occlusion event can be represented as an exception (the unexpected failure to appear) to a predicted event (the expected appearance of an object at a certain location). The On and Off channels thus improve the ability of visual systems to model, predict, and represent the visual appearance of the world.
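The role of the opponent channels can be sketched in a minimal toy simulation. This is our illustration, not the model described above: the occluder positions, function names, and binary encoding are all assumptions chosen for clarity. An On channel signals a confirmed prediction (the object appears where the rule says it should), while an Off channel signals the exception (the predicted appearance fails because an occluder hides the object).

```python
# Toy sketch (illustrative assumption, not the authors' network):
# opponent On/Off channels flagging an occlusion event as an
# exception to a predictive rule.

OCCLUDER = range(3, 6)  # positions covered by a hypothetical occluder

def observe(position: int) -> bool:
    """The object is visible unless an occluder covers its position."""
    return position not in OCCLUDER

def on_off(predicted_visible: bool, observed_visible: bool):
    """Opponent channel activities for one prediction.

    On: the object appears as the predictive rule expects.
    Off: the predicted appearance unexpectedly fails, marking
    an occlusion event as an exception to the rule.
    """
    on = float(predicted_visible and observed_visible)
    off = float(predicted_visible and not observed_visible)
    return on, off

# An object moves rightward one position per step; the rule always
# predicts it will be visible at its current position.
for pos in range(8):
    on, off = on_off(True, observe(pos))
    tag = "occlusion exception" if off else "predicted appearance"
    print(pos, on, off, tag)
```

At positions 3 through 5 the Off channel alone is active, so the unexpected disappearance is explicitly represented rather than simply being an absence of signal, which is the sense in which the opponent code captures exceptions to predictions.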