CS395T: Autonomous Robots -- Assignment 3

Assignment 3: Low-Level and High-Level Vision


This assignment is designed to give you a feel for the problems of doing visual processing in a constrained mobile-robot domain. The assignment consists of two parts: Part I focuses on the problem of Color Segmentation, while Part II introduces you to the process of Object Recognition. The associated code and data for the entire assignment can be found at /robosoccer/assignments/prog3/ on Vieri. All directories mentioned below refer to subdirectories of this directory.

Both parts of the assignment are to be turned in by the due date, but some sections may take considerably more work than others - Part I is definitely simpler than Part II. Pace yourself accordingly, and please do not leave the tasks until the day before the assignment is due.

Part I - Color Segmentation.

Your job here is to develop a classifier that performs color segmentation.
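One common approach on these robots is a precomputed lookup table that maps each (Y, U, V) pixel value to a symbolic color class. The sketch below is only illustrative and rests on several assumptions: a YUV camera format, 4 bits of table resolution per channel, and placeholder color labels - adapt all of these to the classes actually used on the field.

    // A minimal sketch of a lookup-table color classifier, assuming YUV pixels.
    // The table resolution (4 bits per channel) and the color labels are
    // placeholders, not the ones used by the provided code.
    #include <cstdint>
    #include <cstring>

    enum ColorClass : uint8_t { UNKNOWN, ORANGE, GREEN, WHITE, YELLOW, BLUE };

    class ColorClassifier {
    public:
        ColorClassifier() { std::memset(table_, UNKNOWN, sizeof(table_)); }

        // Label every (y, u, v) cell that falls inside a hand-tuned box.
        void addBox(ColorClass c, uint8_t yLo, uint8_t yHi,
                    uint8_t uLo, uint8_t uHi, uint8_t vLo, uint8_t vHi) {
            for (int y = yLo >> 4; y <= yHi >> 4; ++y)
                for (int u = uLo >> 4; u <= uHi >> 4; ++u)
                    for (int v = vLo >> 4; v <= vHi >> 4; ++v)
                        table_[y][u][v] = c;
        }

        // Classifying a pixel is a single table lookup -- cheap enough to run
        // on every pixel of every frame on the robot.
        ColorClass classify(uint8_t y, uint8_t u, uint8_t v) const {
            return static_cast<ColorClass>(table_[y >> 4][u >> 4][v >> 4]);
        }

    private:
        uint8_t table_[16][16][16];  // 4 bits per channel -> 4 KB
    };

Hand-tuned boxes are the simplest way to populate such a table; any training procedure that assigns labels to (Y, U, V) cells can be substituted without changing the per-pixel lookup.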

I(a) Normal testing conditions:

I(b) Changed illumination conditions:

Just to give you a feel for the sensitivity of color segmentation to illumination changes, try your classifier on the set of images captured under a different set of illumination conditions (directory Illum2/). Your results will probably not be good, but do not worry about it - there is no need to tune your classifier for the different conditions.

Deliverables:

Part II - Object Recognition.

Here, the objective is to design a system that recognizes objects in the input images. At this point, you are given a vision system that performs the low-level visual processing reasonably well: it works up to the stage where the output is a set of bounding boxes for the candidate blobs in each image, with a separate list of blobs for each color. The properties of the blobs (which can be used for further processing) are explained in the team tech-report; a sketch of how that output might be consumed appears below. You can choose to use either the entire system or only the output from the color segmentation stage.
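As a rough illustration of consuming the blob output, the sketch below filters one color's blob list for ball-like candidates. The Blob fields, thresholds, and function names are assumptions made for illustration only; the authoritative blob properties are the ones documented in the team tech-report.

    // A sketch of filtering bounding-box output. The Blob fields below are
    // illustrative placeholders, not the provided structure.
    #include <vector>

    struct Blob {
        int xMin, yMin, xMax, yMax;  // bounding box in image coordinates
        int pixelCount;              // classified pixels inside the box
    };

    // The ball should be roughly square and reasonably dense; discard orange
    // blobs that are long and thin (likely noise along a color boundary).
    bool looksLikeBall(const Blob& b) {
        int w = b.xMax - b.xMin + 1;
        int h = b.yMax - b.yMin + 1;
        if (w <= 0 || h <= 0) return false;
        double aspect  = static_cast<double>(w) / h;
        double density = static_cast<double>(b.pixelCount) / (w * h);
        return aspect > 0.5 && aspect < 2.0 && density > 0.4;
    }

    // Among the plausible candidates, prefer the one with the most pixels.
    const Blob* bestBall(const std::vector<Blob>& orangeBlobs) {
        const Blob* best = nullptr;
        for (const Blob& b : orangeBlobs)
            if (looksLikeBall(b) && (!best || b.pixelCount > best->pixelCount))
                best = &b;
        return best;
    }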
Note: All the testing for this section should be done on the larger field with all the overhead lamps switched on. Ask somebody at the lab if you are not sure how to operate the lamps. Switch them on only when you need to use them, but please do not turn them on and off frequently, and when you do turn them on or off, do so slowly. Hopefully you have realized by now that color segmentation is sensitive to illumination changes. Remember: if any of the lamps needs to be replaced, everyone will have to wait until that is done.

The system is set up to provide output in two forms:
  1. The system combines the vision system with the LED display system and lights up specific LEDs when specific objects are seen in the image. The appropriate flags have been provided in the vision module; if they are set to true, the corresponding LEDs will light up automatically.
  2. The system provides the image-plane position of each object seen at any given instant. To print it to the screen, you can use the OSYSPRINT function and telnet to the robot. If you are interested, you can also calculate the position of the objects relative to the robot, as sketched after this list.
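If you do pursue the position relative to the robot, a standard starting point is a pinhole camera model: bearing from the centroid's horizontal offset, and distance from the apparent size of an object of known physical size. The sketch below is a simplification under stated assumptions - the image width, field of view, and ball radius are placeholders, and head/neck joint angles are ignored; on the robot you would print via OSYSPRINT rather than printf.

    // Estimating bearing and distance from an object's image-plane position,
    // assuming a simple pinhole model. All constants are placeholders --
    // substitute the values for your camera and objects.
    #include <cmath>
    #include <cstdio>

    const double kImageWidth = 208.0;  // pixels (placeholder)
    const double kHalfFovX   = 0.5;    // radians, half horizontal FOV (placeholder)
    const double kBallRadius = 0.03;   // meters (placeholder)

    // Horizontal bearing to the object, from its centroid column.
    double bearingTo(double centroidX) {
        double focal = (kImageWidth / 2.0) / std::tan(kHalfFovX);
        return std::atan2((kImageWidth / 2.0) - centroidX, focal);
    }

    // Distance from apparent size: a ball of radius r spanning widthPx pixels
    // is approximately focal * 2r / widthPx meters away.
    double distanceTo(double widthPx) {
        double focal = (kImageWidth / 2.0) / std::tan(kHalfFovX);
        return focal * (2.0 * kBallRadius) / widthPx;
    }

    int main() {
        // e.g., a ball centered at column 140 spanning 20 pixels:
        std::printf("bearing %.2f rad, distance %.2f m\n",
                    bearingTo(140.0), distanceTo(20.0));
        return 0;
    }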
To get from the given code to the desired outputs, you first need some information on the code:
Next, some information on the UTAssist debugging tool: for more details, take a look at this page: UTAssist Info. Some of the information there may be outdated, such as the figures for the sizes of the various messages. You will get more information and a demo of UTAssist in class.

Finally, some information on the LED interface that has already been set up for you:
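Purely as an illustration (the actual flag names are defined for you in the vision module; the ones below are hypothetical placeholders), driving the LEDs from the recognition results might look like this:

    // Hypothetical sketch of setting the provided LED flags from what was
    // recognized in the current frame. Field names are placeholders.
    struct LedFlags {
        bool sawBall;
        bool sawGoal;
        bool sawBeacon;
    };

    void updateLeds(LedFlags& flags, bool ballSeen, bool goalSeen, bool beaconSeen) {
        // Per the setup described above, setting a flag to true causes the
        // corresponding LED to light up automatically.
        flags.sawBall   = ballSeen;
        flags.sawGoal   = goalSeen;
        flags.sawBeacon = beaconSeen;
    }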

Deliverables:



Page maintained by Peter Stone and Dan Stronger