Construction of the Object Semantic Hierarchy (2009)
An intelligent robot must be able to perceive and reason robustly about its world in terms of objects, among other foundational concepts. The robot can draw on rich data for object perception from continuous sensory input, in contrast to the usual formulation that focuses on objects in isolated still images. Additionally, the robot needs multiple object representations to deal with different tasks and/or different classes of objects. We present the Object Semantic Hierarchy (OSH), which consists of multiple representations with different ontologies. The OSH factors the problems of object perception so that intermediate states of knowledge about an object have natural representations, with relatively easy transitions from less detailed to more detailed representations. Each layer in the hierarchy builds an explanation of the sensory input stream, in terms of a stochastic model consisting of a deterministic model and an unexplained "noise" term. Each layer is constructed by identifying invariants to reduce the previous layer's noise term. In the final model, the scene is explained in terms of constant background and object models, and low-dimensional pose trajectories of the observer and the dynamic objects. The object representations in the OSH range from 2D regions, to 2D planar components with 3D poses, to structured 3D models of objects. This paper presents the Object Semantic Hierarchy in detail, describes the current implementation, and presents evaluation results.
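The layered "deterministic model plus noise" idea from the abstract can be illustrated with a toy sketch. This is a hypothetical one-dimensional example, not the paper's implementation: each layer explains the input stream with a simple deterministic model, and the next layer identifies an invariant (here, a linear drift) that shrinks the previous layer's unexplained residual.

```python
# Toy sketch (assumed example, not the OSH implementation): each layer
# explains the sensory stream as a deterministic model plus a residual
# "noise" term; the next layer finds structure in that residual.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Hypothetical 1-D sensory stream: a constant scene value, a slow drift
# (e.g., observer motion), and small unmodeled noise.
noise = [0.10, -0.20, 0.05, 0.15, -0.10, 0.00, 0.08, -0.12]
stream = [2.0 + 0.5 * i + n for i, n in enumerate(noise)]

# Layer 1: explain the stream with a constant ("background") model.
const = sum(stream) / len(stream)
resid1 = [v - const for v in stream]

# Layer 2: identify a linear invariant (the drift) in layer 1's residual.
n = len(resid1)
mi = (n - 1) / 2.0
slope = (sum((i - mi) * r for i, r in enumerate(resid1))
         / sum((i - mi) ** 2 for i in range(n)))
resid2 = [r - slope * (i - mi) for i, r in enumerate(resid1)]

# Each layer's model absorbs structure, so the unexplained variance drops.
var1, var2 = variance(resid1), variance(resid2)
```

In this sketch the recovered slope approximates the true drift, and the residual variance falls from layer to layer, mirroring how each OSH layer is built by identifying invariants that reduce the previous layer's noise term.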
In Fifth International Cognitive Vision Workshop (ICVW-09), 2009.

Benjamin Kuipers (kuipers [at] cs utexas edu)
Changhai Xu (changhai [at] cs utexas edu)