Motion Segmentation by Learning Homography Matrices from Motor Signals (2011)
Motion information is an important cue for a robot to separate foreground moving objects from the static background world. Based on the observation that the motion of the background (from the robot's egocentric view) correlates more strongly with the robot's motor signals than the motion of foreground objects does, we propose a novel method to detect foreground moving objects by clustering image features according to their motion consistency with motor signals. Corner/edge features are detected and tracked across adjacent frames. The errors between the feature locations estimated from motor signals and their actual tracked locations are calculated, and the features are clustered into background/foreground by Expectation-Maximization on these errors. The labeled features are then used for pixel-level image segmentation with the Active Contours and Graph-based Transduction techniques. Unlike pixel-level background subtraction methods, the proposed approach does not require a large number of frames to construct a background model, and does not suffer from accumulated image registration error with a dynamic camera. In contrast to existing sparse-feature-based foreground/background separation methods, our approach clusters features in a one-dimensional space rather than a higher-dimensional one, and there is no need to search an affine or homography transformation space or a motion trajectory space.
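The core clustering step described in the abstract can be illustrated with a minimal sketch: fit a two-component 1D Gaussian mixture to the scalar prediction errors via Expectation-Maximization, then label the low-error component as background and the high-error component as foreground. This is a hedged illustration, not the authors' implementation; the function name `em_two_gaussians`, the initialization, and the synthetic error values are all assumptions made for the example.

```python
import math
import random

def em_two_gaussians(errors, iters=50):
    """Cluster 1D prediction errors into background/foreground with EM.

    Fits a two-component Gaussian mixture. Background features (whose motion
    is consistent with the motor signals) form the low-error component;
    foreground features form the high-error component.
    """
    # Initialize the two components at the extremes of the error range.
    mu = [min(errors), max(errors)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]

    def pdf(x, m, v):
        return math.exp(-(x - m) ** 2 / (2.0 * v)) / math.sqrt(2.0 * math.pi * v)

    for _ in range(iters):
        # E-step: responsibility of each component for each error value.
        resp = []
        for x in errors:
            p = [pi[k] * pdf(x, mu[k], var[k]) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate means, variances, and mixing weights.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, errors)) / nk
            var[k] = max(
                sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, errors)) / nk,
                1e-6,  # floor to avoid variance collapse
            )
            pi[k] = nk / len(errors)

    # The component with the smaller mean error is the background (label 0).
    bg = 0 if mu[0] < mu[1] else 1
    labels = [0 if r[bg] >= 0.5 else 1 for r in resp]
    return labels, sorted(mu)

# Synthetic errors (hypothetical): background features near 0.5 px,
# foreground features near 8 px of prediction error.
random.seed(0)
errors = ([random.gauss(0.5, 0.2) for _ in range(40)]
          + [random.gauss(8.0, 1.0) for _ in range(10)])
labels, mu = em_two_gaussians(errors)
```

Because the clustering is performed on a single scalar per feature, there is no search over affine or homography parameter spaces at this stage; the transformation is already fixed by the motor signals.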
View:
PDF
Citation:
Canadian Conference on Computer and Robot Vision (CRV-11) (2011).
Benjamin Kuipers, Formerly Affiliated Faculty, kuipers [at] cs utexas edu
Changhai Xu, Ph.D. Alumni, changhai [at] cs utexas edu