Person Tracking on Mobile Robots
Tracking a particular person among multiple people is a fundamental yet challenging task for a mobile robot that interacts with humans. Various features of the person of interest, such as his/her face or clothing, need to be learned online and recognized in real time. We present the methods our home-assistant robot uses to robustly track a person.
Robust Real-Time Face Recognition
Figures: the real-time face detector, extracted SIFT features, and SIFT feature matches between the same person and between different people.
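As an illustration of this face-recognition pipeline, the sketch below detects a face and compares SIFT descriptors from two face crops using OpenCV. It is only a minimal sketch: the Haar cascade detector, the largest-face heuristic, and the 0.75 ratio-test threshold are assumptions for the example, not the exact configuration used on the robot.

# Sketch: SIFT-based face matching with OpenCV (parameter values are assumptions).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
sift = cv2.SIFT_create()

def face_descriptors(bgr_image):
    """Detect the largest face in the frame and return its SIFT descriptors."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face detected in this frame
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest detection
    _, descriptors = sift.detectAndCompute(gray[y:y + h, x:x + w], None)
    return descriptors

def match_score(desc_a, desc_b, ratio=0.75):
    """Count Lowe ratio-test matches; more matches suggests the same person."""
    if desc_a is None or desc_b is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = 0
    for pair in matcher.knnMatch(desc_a, desc_b, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good += 1
    return good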
Heterogeneous Inter-Classifier Feedback
An overall classifier is built from two or more heterogeneous sub-classifiers. We divide the characteristics of the person into two groups: primary and secondary. The primary characteristic must be unique, but classifying it may be computationally expensive or susceptible to noisy input data. The secondary characteristic may be ambiguous, but it is computationally cheaper and more robust to noise; it can be introduced to compensate for the shortcomings of a classification based solely on the primary characteristic and to act as a fall-back classifier. By dividing a single classification problem into several sub-problems, we enable the classifiers to give feedback to one another, which improves both the accuracy and the speed of the overall classifier.
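A minimal sketch of this structure is given below, assuming a confidence-thresholded primary classifier and a feedback hook on each side; the class and method names are hypothetical placeholders, not taken from our implementation.

# Sketch of the primary/secondary split with inter-classifier feedback.
# Class and method names are hypothetical; the stub bodies stand in for real models.

class PrimaryClassifier:
    """Unique but expensive or noise-sensitive (e.g. a face recognizer)."""
    def classify(self, observation):
        ...  # stub: return (label, confidence) or None when classification fails
    def add_training_sample(self, observation, label):
        ...  # feedback hook: accept extra training data suggested by the peer

class SecondaryClassifier:
    """Ambiguous but cheap and noise-robust (e.g. a shirt-color classifier)."""
    def classify(self, observation):
        ...  # stub: return (label, confidence)
    def retrain(self, observation, label):
        ...  # feedback hook: re-learn the model, e.g. under new lighting

class OverallClassifier:
    def __init__(self, primary, secondary, threshold=0.5):
        self.primary, self.secondary, self.threshold = primary, secondary, threshold

    def classify(self, observation):
        result = self.primary.classify(observation)
        if result is not None and result[1] >= self.threshold:
            label, _ = result
            # Primary is trusted: let it correct the cheaper classifier.
            self.secondary.retrain(observation, label)
            return label
        # Primary failed or is unsure: fall back to the secondary characteristic.
        label, _ = self.secondary.classify(observation)
        # Secondary suggests the primary learn from the observation it missed.
        self.primary.add_training_sample(observation, label)
        return label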
Tracking a person solely by his/her face is not robust enough for our assistant robot, because the face may not always be detected in the video stream. We therefore track a person by his/her face (primary characteristic) and shirt color (secondary characteristic). With inter-classifier feedback, the overall classifier improves in both accuracy and speed. The face recognizer is more robust to color changes caused by changes in ambient brightness, so it can suggest that the shirt classifier re-train itself under the new lighting conditions. Since SIFT features are sensitive to directed lighting, the shirt classifier can suggest that the face recognizer add a misclassified face as additional training data. The face detector can also be skipped every other frame to improve the frame rate without hurting the person tracking, because it is backed up by the shirt classifier.
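The sketch below shows how such a combined tracking loop could look, assuming hypothetical FaceRecognizer and ShirtColorClassifier interfaces; the frame-skipping interval and confidence threshold are example values, not the ones used on the robot.

# Sketch of the combined face + shirt-color tracking loop described above.
# FaceRecognizer and ShirtColorClassifier are hypothetical placeholders.

def track_person(frames, face_recognizer, shirt_classifier,
                 face_every_n_frames=2, confidence_threshold=0.6):
    for i, frame in enumerate(frames):
        face_result = None
        # Run the expensive face recognizer only every other frame; the shirt
        # classifier backs it up on the skipped frames.
        if i % face_every_n_frames == 0:
            face_result = face_recognizer.recognize(frame)

        if face_result is not None and face_result.confidence >= confidence_threshold:
            # Face is trusted: if the shirt color no longer matches, the lighting
            # probably changed, so ask the shirt classifier to re-train.
            if not shirt_classifier.matches(frame, face_result.region_below_face):
                shirt_classifier.retrain(frame, face_result.region_below_face)
            yield face_result.position
        else:
            shirt_result = shirt_classifier.locate(frame)
            if shirt_result is not None:
                # Shirt classifier found the person but the face was missed or
                # misclassified (SIFT is sensitive to directed lighting), so
                # suggest the face recognizer add this face as training data.
                face_recognizer.add_training_sample(frame, shirt_result.region)
                yield shirt_result.position
            else:
                yield None  # person lost in this frame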
The person-tracking method described above was a key component of our RoboCup@Home 2007 competition entry. A QuickTime video of our presentation in the finals can be watched here.