Due dates

1/30  Select topic
3/1   Project proposal due
5/3   Term project due


Robotics projects


(1) Self calibration and health monitoring for an intelligent wheelchair

Tom Lauzon
thomas@lauzon.org
Presentation: 3/1/07

This project addresses the question of how efficient the robot's sensors and actuators are at a given time, and what strategies the robot should develop to correct any deficiencies. The objective will first be to make the robot learn how its sensors respond to the displacement commanded by its motors. Once it has represented how the sensors are supposed to react, it can try to determine whether a sensor or actuator is not responding correctly. Further, an efficient method for recording health issues shall be designed. Based on its health status, the robot shall adopt an appropriate corrective measure.

Matlab models of the wheelchair (especially odometry, range finder, and motors), as well as a 2D model environment, will be required for simulating the algorithms.
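
Purely as an illustration (in Python rather than the proposed Matlab models, and not part of the proposal itself), the core health check might compare the range reading predicted from the commanded motion against the observed reading and flag a sensor whose average residual drifts past a threshold; the forward model and threshold below are assumptions.

    from collections import deque

    class SensorHealthMonitor:
        """Flags a range sensor whose readings stop matching the motion model."""

        def __init__(self, window=50, threshold=0.25):
            self.residuals = deque(maxlen=window)   # recent |predicted - observed| errors (m)
            self.threshold = threshold              # mean residual above this flags a fault

        def predict_range(self, prev_range, forward_velocity, dt):
            # Toy forward model: driving straight toward a wall shortens the range
            # by the distance traveled.  A real model would come from calibration.
            return prev_range - forward_velocity * dt

        def update(self, prev_range, observed_range, forward_velocity, dt):
            predicted = self.predict_range(prev_range, forward_velocity, dt)
            self.residuals.append(abs(predicted - observed_range))
            return self.healthy()

        def healthy(self):
            if not self.residuals:
                return True
            return sum(self.residuals) / len(self.residuals) < self.threshold

    # Example: a range finder that "sticks" at 4.0 m after step 40 accumulates
    # large residuals and is eventually reported as unhealthy.
    monitor = SensorHealthMonitor()
    true_range, v, dt = 8.0, 0.5, 0.1
    for step in range(100):
        new_range = true_range - v * dt
        observed = 4.0 if step > 40 else new_range
        ok = monitor.update(true_range, observed, v, dt)
        true_range = new_range
    print("sensor healthy:", ok)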


(2) High performance local control

Chris Flesher
chris.flesher@gmail.com
Presentation: 3/6/07

I plan to develop a high performance control system for our intelligent wheelchair Vulcan. In order to accomplish this I will model the system dynamics, parameterize the system model, and develop a suitable control system / path planner. I am planning on testing the resulting control system in a variety of situations, some of which may include commanding the vehicle to move through a doorway, into an elevator, or into an elevator with dynamic obstacles present. I hope to have the basic dynamic model and controller done fairly soon so that other people might be able to incorporate it into their projects. If all goes according to plan I would also like to develop an algorithm to track and avoid dynamic obstacles.
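
As a hedged illustration of the starting point for such a model (not Vulcan's identified dynamics), the Python sketch below integrates idealized differential-drive kinematics and drives toward a local goal with a simple proportional controller; all gains and limits are assumptions.

    import math

    # Minimal differential-drive kinematic model with a proportional go-to-goal
    # controller (illustrative only; the real project would use an identified
    # dynamic model of Vulcan, not idealized kinematics).

    def step(pose, v, w, dt):
        """Integrate unicycle kinematics: pose = (x, y, theta)."""
        x, y, th = pose
        return (x + v * math.cos(th) * dt,
                y + v * math.sin(th) * dt,
                th + w * dt)

    def go_to_goal(pose, goal, k_v=0.5, k_w=1.5, v_max=1.0):
        """Proportional controller: heading error drives angular velocity."""
        x, y, th = pose
        dx, dy = goal[0] - x, goal[1] - y
        dist = math.hypot(dx, dy)
        heading_err = math.atan2(dy, dx) - th
        heading_err = math.atan2(math.sin(heading_err), math.cos(heading_err))  # wrap to [-pi, pi]
        v = min(k_v * dist, v_max)
        w = k_w * heading_err
        return v, w

    pose, goal = (0.0, 0.0, 0.0), (2.0, 1.0)
    for _ in range(400):
        v, w = go_to_goal(pose, goal)
        pose = step(pose, v, w, dt=0.05)
    print("final pose:", tuple(round(p, 2) for p in pose))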

For my simulation environment I will be using a custom-built simulator I have been working on for the Robotics and Automation Society here at UT. It is a simple 2-D simulator that I am familiar with. I'm choosing this simulator because it allows me to modify major parts of the simulation without much hassle. It is still somewhat buggy, so I probably wouldn't recommend that anyone else in the class use it (unless you are really interested). It might end up being a pretty neat simulation environment after the semester is over, though. The basic architecture is composed of several mini-applications written in Python and networked together using IceStorm (so you could actually interface with the code using a number of languages, including C/C++, Java, C#, Ruby, etc.); it takes advantage of Wykobi (a C++ computational geometry library) to do the sensor / obstacle collision calculations.


(3) High performance control using visual information

Shilpa Gulati
gulati@mail.utexas.edu
Presentation: 3/8/07

Objective:

Demonstrate "good" performance of the wheelchair in executing specific control actions using visual information from a stereo vision camera mounted on the wheelchair. Measures to characterize "good" performance will also be defined. Specifically, this will involve:

  1. Formulate 1-2 specific control tasks.
  2. Identify the form of the control laws based on the desired performance.
  3. Identify the information required from the environment for formulating the control laws.
  4. Analyze visual data to identify features that can reliably provide the information in step (3).
  5. Formulate the control laws and implement them in simulation and finally on Vulcan (a minimal sketch of one such control law follows this list).
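
Purely as an illustration of the form such a law might take (not the project's actual formulation), here is a Python sketch of a proportional visual-servoing rule for one candidate task, centering a doorway while approaching it; the feature detector, gains, and stopping distance are all assumptions.

    # Illustrative sketch: a proportional control law that keeps a visual feature
    # (say, the image-plane centroid of a doorway) centered while driving forward.
    # The feature detector itself is assumed to exist.

    def doorway_control(feature_x, image_width, depth, k_w=2.0, v_nominal=0.4):
        """feature_x: horizontal pixel coordinate of the tracked feature.
        depth: stereo depth to the feature (m).  Returns (v, w) commands."""
        # Normalized horizontal offset in [-1, 1]; zero when the feature is centered.
        offset = (feature_x - image_width / 2.0) / (image_width / 2.0)
        w = -k_w * offset                      # turn to re-center the feature
        v = v_nominal if depth > 0.8 else 0.0  # stop when close to the doorway
        return v, w

    # Example: feature detected 80 px left of center in a 640 px image, 3 m away.
    print(doorway_control(feature_x=240, image_width=640, depth=3.0))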

Testing

There are two ways of testing:
  1. Model the environment of the control task in a VR environment. Simulate the motion of the wheelchair and get simulated stereo vision data. The VR environment should provide:
    1. Ability to model the dynamics of motion of the wheelchair.
    2. Correct simulation of stereo vision as the chair moves (taking into account pose, velocity and acceleration).
  2. Get stereo image data from the wheelchair as it is manually made to perform the selected control-level actions and use a simplified simulation environment that models the features extracted from the real data.

Preference of Testing Method

I would prefer to have (1) since the data can be gathered easily, and under a wide variety of pose, velocity and acceleration conditions. If (2) has to be used, the focus will be more on the low-level visual image processing aspect (how to get useful information when lighting conditions vary, or when there are shadows etc.). However, (2) is essential for getting a real-world implementation.


(4) Local motion planning

Todd Hester
todd@cs.utexas.edu
Presentation: 3/20/07

Project Description

Many disabled persons who require the use of a wheelchair have difficulty using one due to other disabilities such as blindness or low vision. Developing an intelligent wheelchair that would navigate without their direct control would greatly improve their quality of life. Such a wheelchair needs to find a quick and safe path to the user's desired destination even when faced with small spaces or dynamic obstacles such as pedestrians. The user of the wheelchair should be allowed to specify what criteria they want to use to optimize the path, such as distance, speed, safety, or comfort. I propose to extend the existing E* path planning algorithm to better fit the needs of the wheelchair by incorporating orientation into its path planning as well as optimizing paths for various criteria. These improvements would make the method more applicable to the wheelchair and would make the wheelchair more beneficial to its users.
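
For illustration only (this is a plain weighted-cost grid search, not E* and not the proposed extension), the following Python sketch shows how user criteria weights could shape a planned path; the grid, weights, and clearance penalties are invented.

    import heapq

    # Toy illustration: grid Dijkstra whose step cost blends distance with a
    # user-weighted safety penalty derived from obstacle clearance.

    def plan(grid_cost, start, goal, w_dist=1.0, w_safety=2.0):
        """grid_cost[r][c] is a clearance penalty (0 = wide open, higher = near obstacles)."""
        rows, cols = len(grid_cost), len(grid_cost[0])
        dist, prev = {start: 0.0}, {}
        pq = [(0.0, start)]
        while pq:
            d, cell = heapq.heappop(pq)
            if cell == goal:
                break
            if d > dist.get(cell, float("inf")):
                continue
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols:
                    nd = d + w_dist * 1.0 + w_safety * grid_cost[nr][nc]
                    if nd < dist.get((nr, nc), float("inf")):
                        dist[(nr, nc)] = nd
                        prev[(nr, nc)] = cell
                        heapq.heappush(pq, (nd, (nr, nc)))
        path, cell = [], goal          # walk back from the goal (assumed reachable)
        while cell in prev:
            path.append(cell)
            cell = prev[cell]
        return [start] + path[::-1]

    # Raising w_safety steers the path away from cells with high clearance penalty.
    grid = [[0, 0, 0, 0],
            [0, 3, 3, 0],
            [0, 0, 0, 0]]
    print(plan(grid, (0, 0), (2, 3)))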


(5) Efficient spatial representation and path finding for a dynamic universe

Padmadevan Chettiar
padmadevan@gmail.com
Presentation: 3/22/07

Problem Statement

In this project I intend to develop an efficient way to represent a two dimensional dynamic universe. Once I have a solid space-time representation of the world, I plan to implement a generic path finding algorithm which will take into account the dynamics of the moving bodies.

Objects in this world will have position, orientation and velocity components. The robot will constantly update the probability distribution of the position of the moving objects and will use this information for efficient path finding. A satisfactory level of intelligence will also be required on the part of the robot to achieve this kind of motion planning which takes into account the future positions of objects.

I intend to implement this project in such a way that extension to the third dimension will take minimal effort, and so that once the simulation runs satisfactorily, it would not be very difficult to make the transition to the real world.
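
As one hedged illustration of such a space-time representation (not the proposed design), the Python sketch below predicts each tracked object forward under a constant-velocity model, with uncertainty growing over the prediction horizon, and returns the probability that a grid cell is occupied at a future time; all parameters are assumptions.

    import math

    def occupancy_probability(cell, t, objects, radius=0.5, sigma0=0.1, sigma_rate=0.2):
        """cell: (x, y) center of a grid cell.  objects: list of dicts with
        position (x, y) and velocity (vx, vy) at time 0.  Returns P(occupied at time t)."""
        p_free = 1.0
        for obj in objects:
            px = obj["x"] + obj["vx"] * t
            py = obj["y"] + obj["vy"] * t
            sigma = sigma0 + sigma_rate * t          # uncertainty grows over the horizon
            d = math.hypot(cell[0] - px, cell[1] - py)
            # Crude isotropic "is the object within radius of the cell" likelihood.
            p_hit = math.exp(-max(0.0, d - radius) ** 2 / (2.0 * sigma ** 2))
            p_free *= (1.0 - p_hit)
        return 1.0 - p_free

    pedestrian = {"x": 0.0, "y": 0.0, "vx": 1.0, "vy": 0.0}
    # A cell 2 m ahead of the pedestrian is unlikely to be occupied now,
    # but very likely occupied about 2 seconds from now.
    print(occupancy_probability((2.0, 0.0), 0.0, [pedestrian]))
    print(occupancy_probability((2.0, 0.0), 2.0, [pedestrian]))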

Platform

I have chosen to implement this project using the Player/Stage multiple robot simulator. Stage is capable of simulating mobile robots in a two-dimensional world, and Player provides accurate models of sensors and other robot hardware. This combination should help me create a simulation that is as close to the real world as possible.

The requirements for a simulation engine are very basic. The engine should be able to simulate any kind of two-dimensional world (indoor, outdoor, etc.) and any kind of moving object. Further, the robot should be able to sense the approximate distance between itself and surrounding objects. I am not going to concentrate on colors and visual appearance; all the decisions of the robot will be based only on whether a particular space will be occupied or not at a given time. So, a reasonable model of a laser range finder should suffice.


(6) Safety in Blind Corners: Helping the Intelligent Wheelchair Navigate Gateways Safely

Kyle Cullen
kmc2256@mail.utexas.edu
Presentation: 3/29/07

The Intelligent Wheelchair can navigate through hallways, avoid objects, and map its environment. At the moment, dynamic obstacles are avoided in the same manner as static obstacles. Dynamic obstacles can be avoided when they are seen, but if an obstacle jumps in front of the wheelchair, evasive action must be taken. Evasive action could be a jarring experience for a blind or low-vision driver. The most probable locations for dynamic obstacles to appear are the blind corners of intersections.

Since quickly avoiding obstacles could scare a blind or low-vision driver, the wheelchair should plan for the possibility of an obstacle appearing from any major blind spot. What safety guarantees need to be made about the likelihood of encountering dynamic obstacles? At the moment the wheelchair does not make path planning adjustments to account for the possibility of dynamic obstacles.

This paper suggests that an algorithm can be devised to allow the wheelchair to take a corner wider, allowing for a greater viewing angle and thus giving the wheelchair more time to stop in an emergency. What does the wheelchair lose if it takes the corner at the slowest speed and the widest turning radius? The wheelchair might take every corner so slowly that it dramatically increases the traversal time. One method of decreasing the traversal time is to allow the wheelchair to dynamically decide the speed and turning radius it needs to safely negotiate the corner. This decision should be based on the probability of a dynamic obstacle appearing in the blind part of the corner. Using this probability means there is an inherent trade-off between traversal time and the safety guarantee. Taking advantage of this trade-off will allow for the combination of greatest safety guarantee and lowest traversal time.
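
As a rough illustration of this trade-off (not the proposed heuristic), the Python sketch below picks the fastest approach speed that still lets the wheelchair stop within the distance it can currently see past the corner; the visibility model, deceleration limit, and reaction time are assumptions.

    import math

    def visible_distance(turn_radius, corner_offset=0.5):
        """Toy visibility model: taking the corner wider (larger radius relative
        to the wall) exposes more of the cross-corridor."""
        return max(0.1, 2.0 * (turn_radius - corner_offset))

    def safe_speed(turn_radius, decel=0.8, reaction_time=0.3, v_max=1.2):
        """Largest speed v with reaction distance + braking distance <= visible
        distance, i.e. v * t_r + v^2 / (2a) <= d_vis."""
        d_vis = visible_distance(turn_radius)
        # Solve v^2/(2a) + v*t_r - d_vis = 0 for the positive root.
        a, b, c = 1.0 / (2.0 * decel), reaction_time, -d_vis
        v = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
        return min(v, v_max)

    # A wider turn exposes more of the corridor and permits a faster corner.
    for r in (0.6, 1.0, 1.5):
        print(f"turn radius {r:.1f} m -> safe speed {safe_speed(r):.2f} m/s")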

In order to effectively test the path planning heuristic, an appropriate robot simulator will be built. This simulator will be able to show how the robot will traverse the corner and allow data to be collected about the heuristic's performance. After performance data has been taken, it can be determined whether the algorithm actually saves any time while allowing for a safe ride and keeping the driver's trust.


(7) Fusion of laser and vision

Greg Freeman
greg.freeman@mail.utexas.edu
Presentation: 4/5/07

This project seeks to exploit the fusion of laser range finding data and stereo vision data to help an intelligent wheelchair develop a map of sidewalks in an outdoor campus environment. It is important for Vulcan, an intelligent wheelchair, to be able to navigate outdoors as well, since its goal is to provide autonomy for individuals with mobility and vision impairments. Sensor responses in outdoor environments differ from those in indoor environments, and therefore many of the techniques used to navigate must be adjusted. Vision data will be needed to determine sidewalk paths. The laser range data can be combined with the video for localization and for providing a local metric map. This project will develop an algorithm to generate a local metric map of sidewalks and evaluate its performance with recorded data from the wheelchair sensors.
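
As a hedged illustration of what per-cell fusion could look like (not the algorithm to be developed), the Python sketch below combines a vision classifier's sidewalk probability with laser free-space evidence in log-odds form; both sensor models are placeholders.

    import math

    def log_odds(p):
        p = min(max(p, 1e-3), 1 - 1e-3)
        return math.log(p / (1.0 - p))

    def fuse_cell(p_vision_sidewalk, p_laser_clear, prior=0.5):
        """Naive-Bayes-style fusion of two independent evidence sources for one
        grid cell: 'looks like sidewalk' from vision, 'traversable' from laser."""
        l = log_odds(prior) \
            + (log_odds(p_vision_sidewalk) - log_odds(prior)) \
            + (log_odds(p_laser_clear) - log_odds(prior))
        return 1.0 / (1.0 + math.exp(-l))

    # Vision is fairly confident the texture is sidewalk; the laser sees no obstacle.
    print(round(fuse_cell(0.8, 0.9), 3))
    # Vision says sidewalk, but the laser reports an obstacle in the cell.
    print(round(fuse_cell(0.8, 0.1), 3))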


(8) Visual recognition of traffic lanes and intersections

Mikhail Iakhiaev
mikhailai@gmail.com
Presentation: 4/10/07

The problem to be solved by this project concerns an automated vehicle driving on a roadway. The vehicle is given a global map of the environment, represented as a set of segments connected to each other. The segments consist of one or more driving lanes, each represented as a series of waypoints with implied line segments between them. The waypoints are specified via GPS coordinates, which give both the topological and metrical map. However, the GPS data will not be fully accurate and will sometimes be incomplete (there might be large gaps between waypoints which contain road curves or even intersections). Therefore, to accomplish the task of driving, the vehicle must be able to follow the road and turn correctly at the intersections. So the project will focus on providing the following capabilities to the vehicle:

The means of realizing the task should be machine vision using the video cameras, but any other available sensory data (e.g. laser rangefinders) should be used whenever it helps to solve the problem.
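
To make the map representation concrete, here is a small Python sketch of segments, lanes, and waypoints, plus a lookup of the next waypoint ahead of the vehicle; the class names and fields are assumptions, not the project's actual data structures.

    from dataclasses import dataclass, field
    from typing import List
    import math

    @dataclass
    class Waypoint:
        x: float          # easting (m), converted from a GPS coordinate
        y: float          # northing (m)

    @dataclass
    class Lane:
        waypoints: List[Waypoint] = field(default_factory=list)

    @dataclass
    class Segment:
        lanes: List[Lane] = field(default_factory=list)
        successors: List[int] = field(default_factory=list)   # indices of connected segments

    def next_waypoint(lane, x, y, heading):
        """Return the first waypoint that lies ahead of the vehicle along its heading."""
        for wp in lane.waypoints:
            dx, dy = wp.x - x, wp.y - y
            if dx * math.cos(heading) + dy * math.sin(heading) > 0.0:   # in front
                return wp
        return None

    lane = Lane([Waypoint(0, 0), Waypoint(10, 0), Waypoint(20, 1)])
    print(next_waypoint(lane, x=12.0, y=0.0, heading=0.0))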


(9) Mobile robot localization in 3D

Tekin Mericli
tekin.mericli@gmail.com
Presentation: 4/12/07

The aim of this research is to localize a mobile robot in a room by detecting and tracking useful static visual features and using them to extract its relative position. Combining the estimates of the robot's position relative to each of those features with the relative positions of the features to each other will pinpoint the location of the robot in 3D space (the room). The corners where two walls and the ceiling meet may be used as landmarks to perform triangulation. Particle filtering may be appropriate for increasing the robustness of the position estimate. Either stereo or mono images can be used. Having a depth map may be useful for fine-tuning, but it is also possible to localize using a single camera. When the project is completed, the robot will be able to look around for a while and estimate its position in the room.
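
As an illustrative sketch of the particle-filter measurement update suggested above (not the project's implementation), the Python fragment below scores a particle by how well the bearing and elevation it predicts to a known ceiling-corner landmark match the observed angles; the landmark position, camera height, and noise level are assumptions.

    import math
    import random

    LANDMARK = (4.0, 3.0, 2.5)          # known 3D position of a ceiling corner (m)
    CAMERA_HEIGHT = 1.2                 # assumed camera height on the robot (m)

    def predicted_angles(particle, landmark):
        """particle: (x, y, theta) robot pose hypothesis on the floor plane."""
        x, y, theta = particle
        dx, dy, dz = landmark[0] - x, landmark[1] - y, landmark[2] - CAMERA_HEIGHT
        bearing = math.atan2(dy, dx) - theta
        elevation = math.atan2(dz, math.hypot(dx, dy))
        return math.atan2(math.sin(bearing), math.cos(bearing)), elevation

    def weight(particle, observed_bearing, observed_elevation, sigma=0.05):
        b, e = predicted_angles(particle, LANDMARK)
        db = math.atan2(math.sin(b - observed_bearing), math.cos(b - observed_bearing))
        err = db ** 2 + (e - observed_elevation) ** 2
        return math.exp(-err / (2.0 * sigma ** 2))

    # Score a few random particles against one observation of the corner.
    obs_b, obs_e = predicted_angles((1.0, 1.0, 0.0), LANDMARK)   # "true" robot pose
    particles = [(random.uniform(0, 5), random.uniform(0, 5), 0.0) for _ in range(5)]
    particles.append((1.0, 1.0, 0.0))
    for p in particles:
        print(tuple(round(v, 2) for v in p), round(weight(p, obs_b, obs_e), 4))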

Simulation:

The task of localization in 3D space requires a 3D simulator, so Player/Stage will not be sufficient. One possibility is using Gazebo. However, I would suggest using USARSim (Urban Search and Rescue Simulator). It is a high-fidelity simulation of urban search and rescue (USAR) robots and environments based on the Unreal Tournament game engine. It is intended as a research tool and is the basis for the RoboCup USAR simulation competition. Various robot and sensor models are available, and new models can be added to the simulator. Unreal Tournament uses the Karma physics engine, one of the most powerful physics engines available, and its 3D graphics provide realistic environments. Here are links to screenshots of the simulation of the officially used rescue areas and some of the robots: http://usl.sis.pitt.edu/ulab/usarsi1.jpg http://usl.sis.pitt.edu/ulab/usarsi2.jpg


(10) Reading the Writing on the Wall:
Visual Localization for the Intelligent Wheelchair

Jeremy Stober
stober@cs.utexas.edu
Presentation: 4/17/07

Problem Statement

My project will focus on robot localization in large scale space using visual information. Given a set of visual images from various places, I propose a method of classifying these places based on local invariant features, recognizable objects, and text in the visual field.

Systems using local invariant features have performed well in visual topological localization tasks. However, local invariant features are identified autonomously and often do not relate to environmental features that make sense for a human operating the wheelchair.

As such, the inclusion of recognizable objects and text extraction in the localization process opens up the possibility of cooperation between the operator and the wheelchair. Text extraction from wheelchair vision makes it possible to label places in a way that has meaning for the wheelchair operator. Signage, which is present both indoors and out, typically represents a discriminative, time-invariant visual feature, and so may yield an improvement over current methods in visual localization.

Experimental Setup

The system will rely primarily on image processing components built on top of OpenCV. The training and test data will consist of digital photos of several indoor and outdoor intersections around campus covering a range of object/no object and text/no text scenarios.
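
As a hedged sketch of the kind of pipeline this could produce (using OpenCV's ORB features purely as a stand-in for whichever local invariant features are chosen, and with placeholder file names), a query photo could be classified by counting feature matches against one reference image per place:

    import cv2

    orb = cv2.ORB_create(nfeatures=500)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def describe(path):
        """Extract local feature descriptors from a grayscale photo."""
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, descriptors = orb.detectAndCompute(img, None)
        return descriptors

    def classify(query_path, place_db):
        """place_db: dict mapping place label -> descriptors of a reference image.
        Returns the label whose reference image shares the most feature matches."""
        query = describe(query_path)
        scores = {}
        for label, ref in place_db.items():
            if query is None or ref is None:
                scores[label] = 0
                continue
            matches = matcher.match(query, ref)
            # Count only reasonably close matches.
            scores[label] = sum(1 for m in matches if m.distance < 40)
        return max(scores, key=scores.get), scores

    # Hypothetical usage with training photos of two campus intersections:
    # db = {"lobby": describe("lobby.jpg"), "crossing": describe("crossing.jpg")}
    # print(classify("query.jpg", db))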



HRI projects

(11) Design and evaluate a joystick-based gesture language

Alan Lockett
alan.lockett@gmail.com
Presentation: 4/19/07

Project Topic:

In order to provide its driver with low-level control, the Intelligent Wheelchair could use input directly from a joystick. But in areas that are difficult to navigate, such as narrow, jagged passages or a cluttered office, this form of control is not sufficiently efficient, since the coarse-grained control input from the joystick would require the user to make multiple, potentially rough maneuvers. A better strategy would allow the driver to specify a direction or destination for local movement, for which the Wheelchair could plan a smooth path that avoids obstacles. Furthermore, for an unsighted individual, the lowest level of control is unrealistic and even dangerous; however, such an individual could use a level of control that allowed the specification of a direction or location, so long as the wheelchair itself could avoid obstacles.

One possible mechanism for achieving object-avoiding direct control would be to develop a gesture language for using the joystick. Such a language would allow the driver to specify directions or locations with the joystick and would be compiled into sufficiently actionable commands for the Wheelchair. For example, a simple language might take the joystick input as specifying a distance and a direction to travel; object-avoiding routines would then be used to plan a path reaching the location at the specified distance along the specified angle. A potential problem with this approach is that the driver may not understand the action of the wheelchair if it veers off of the expected path to avoid an obstacle. Thus an important design feature of a gesture language is that it should perform in a way that inspires the trust of the driver. For example, rather than specifying the location directly, a gesture language could parse a trajectory from the joystick input and follow that trajectory to a location, providing an error signal to the driver when substantial deviation from the trajectory would be required to avoid an obstacle. Various candidate gesture languages (at least two, probably three or four) will be designed and implemented in order to compare them for ease of use, effectiveness, intuitiveness, and other valuable characteristics. These languages will then be evaluated with respect to each other in hopes of yielding a control methodology that optimally matches the expectations of the driver with the need for a smooth, simple, intuitive driving experience.
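
As an illustration of the simplest gesture language described above (not one of the designed candidates), the Python sketch below compiles a single joystick deflection into a "go this far in this direction" command; the scaling constant and dead zone are assumptions.

    import math

    MAX_DISTANCE = 3.0      # full deflection maps to a 3 m local goal (assumed)
    DEADZONE = 0.15         # ignore small accidental deflections

    def compile_gesture(jx, jy):
        """jx, jy in [-1, 1]: joystick axes (forward is +jy).
        Returns (distance_m, heading_rad) or None if inside the dead zone."""
        magnitude = math.hypot(jx, jy)
        if magnitude < DEADZONE:
            return None
        heading = math.atan2(jx, jy)           # 0 = straight ahead, positive = right
        distance = MAX_DISTANCE * min(magnitude, 1.0)
        return distance, heading

    # Half deflection forward-right compiles to roughly 2.1 m at 45 degrees right.
    print(compile_gesture(0.5, 0.5))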

Needs

The test environment will be simulated using Player/Stage in order to develop a 3-D environment in which the robot can navigate. Occupancy maps will be provided to the robot directly, sidestepping the localization problem, which is tangential to this research. Tests will be performed using a USB joystick, which will need to be obtained. It would be ideal if a trajectory-following, object-avoiding control algorithm were available out of the box; if not, this component will need to be developed. Also, any existing software for interpreting joystick input would prove useful as well.


(12) Taking the helm: Effectiveness of input devices

Rick Bolkey
rbolkey@mail.utexas.edu
Presentation: 4/24/07

This project will test the effectiveness of various hand-held input devices for command-level navigation. Different input devices may be more natural to different users. Some devices may lend themselves to semantics that are easier to learn, while other devices could allow more expressive languages. Does it make a difference if the device is locked into the frame of the wheelchair or held freely by the driver? Potential devices to investigate could include the joystick, a gamepad, a touchpad, and a pointer. Some may also lend themselves to different forms of feedback: joysticks can provide force feedback, and many gamepads "rumble".

This project will need an assortment of input devices as well as an environment in which to test wheelchair navigation (Player/Stage should work). It will obviously also need many users in order to measure efficiency and gauge input preferences.


(13) Shooting lasers from their fingertips: Communicating obstacle locations through force feedback

Brad Knox
bradknox81@gmail.com
Presentation: 4/26/07

Although assistive technology seeks to fill the needs of the handicapped, we must be careful that, in doing so, we do not create additional neediness. Rather, assistive technology should empower the user as much as possible. The autonomous wheelchair should be capable of acting without much user input, but it should also convey the information it gathers to the user. Specifically, the wheelchair's map, orientation knowledge, and odometry data contain a wealth of information that cannot be easily communicated to a visually impaired user. A force-feedback joystick is a possible means of communicating such information. I plan to use force feedback to communicate the presence and direction of obstacles and clear paths. Additionally, in a different mode, heading changes, forward displacement, portal locations, and obstacle occupancy areas will be communicated.
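
As one hedged illustration of such a mapping (not the proposed mode design), the Python sketch below turns nearby laser-detected obstacles into a repulsive force on the joystick, growing as obstacles get closer; the force scale and cutoff distance are assumptions.

    import math

    MAX_FORCE = 1.0        # normalized joystick force (assumed scale)
    CUTOFF = 2.0           # obstacles farther than this produce no force (m)

    def feedback_force(obstacles):
        """obstacles: list of (distance_m, bearing_rad) from the laser scan,
        with bearing 0 straight ahead and positive to the left.
        Returns (fx, fy): force vector to command on the joystick axes."""
        fx = fy = 0.0
        for dist, bearing in obstacles:
            if dist >= CUTOFF or dist <= 0.0:
                continue
            magnitude = MAX_FORCE * (1.0 - dist / CUTOFF)
            # Push the hand away from the obstacle: opposite to its bearing.
            fx -= magnitude * math.cos(bearing)
            fy -= magnitude * math.sin(bearing)
        return fx, fy

    # Obstacle 0.5 m away, slightly ahead-left: the joystick pushes back and to
    # the right, away from it.
    print(feedback_force([(0.5, math.radians(30))]))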


(14) Generating verbal descriptions to orient visually impaired Vulcan users

Charles Chen
chencl1@mail.utexas.edu
Presentation: 5/1/07

Problem Statement:

The purpose of my project is to create a program that will generate verbal descriptions so that visually impaired Vulcan users can get a sense of their environment. Users must know where they are in order to give Vulcan directions. For visually impaired users who cannot see what is going on around them, Vulcan will have to be able to describe their surroundings so that they can orient themselves. Since Vulcan already builds up a map and knows where it is in this map, it should be possible to take this information and generate a verbal description of the environment. Vulcan should be able to tell users that they are in a hall or in a room and list the exits; this would enable users to drive Vulcan by telling it what to do at each decision point.

At the most basic level, Vulcan should be able to say something like "You are in Hall #5. There are 4 gateways: North gateway #1. East gateway #2. South gateway #3. West gateway #4." To help make the place IDs more memorable, computer vision could be used to determine the dominant color of the place. That color description can be added to the start of the place name, giving the user one more piece of information to help identify the place. So instead of simply saying "Hall #5", Vulcan could say "Green Hall #5". Also, users will be allowed to tag the places by renaming them to something that makes more sense to them; it may be easier for them to remember "Main Hall" than "Green Hall #5". In addition, it may be helpful for users to hear where each of the gateways leads. Users should be able to plan and make decisions better if they hear something like, "North gateway #1 leads to Main lobby. East gateway #2 leads to Intelligent robotics research lab. South gateway #3 leads to Beige Hall #4. West gateway #4 leads to Auditorium."

Since it is more natural for most people to think in terms of "left", "right", "forward", and "back" rather than in terms of the cardinal directions, there should be an option for Vulcan to speak in those terms. However, relative terms can be ambiguous if there are multiple gateways on the same side, and cardinal directions keep the gateways of a particular room listed in the same order regardless of where the user enters from, so the ability to describe gateways in terms of the cardinal directions should probably still be present.
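
As a hedged sketch of how such descriptions might be generated from the map (the data format and wording below are assumptions based on the examples above), a simple template-based generator supporting both cardinal and relative modes could look like this in Python:

    CARDINALS = ["North", "East", "South", "West"]
    RELATIVE = {0: "forward", 1: "to the right", 2: "behind", 3: "to the left"}

    def describe(place, facing=0, mode="cardinal"):
        """place: dict with 'name' and 'gateways'; each gateway has 'number',
        'direction' (0=N, 1=E, 2=S, 3=W) and an optional 'leads_to' label.
        facing: the direction the user currently faces, used in relative mode."""
        lines = [f"You are in {place['name']}. There are {len(place['gateways'])} gateways."]
        for g in place["gateways"]:
            if mode == "cardinal":
                where = f"{CARDINALS[g['direction']]} gateway #{g['number']}"
            else:
                where = f"Gateway #{g['number']} {RELATIVE[(g['direction'] - facing) % 4]}"
            if g.get("leads_to"):
                where += f" leads to {g['leads_to']}"
            lines.append(where + ".")
        return " ".join(lines)

    hall = {"name": "Green Hall #5",
            "gateways": [{"number": 1, "direction": 0, "leads_to": "Main lobby"},
                         {"number": 2, "direction": 1},
                         {"number": 3, "direction": 2, "leads_to": "Beige Hall #4"}]}
    print(describe(hall))
    print(describe(hall, facing=1, mode="relative"))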

Evaluation:

  1. Build a map with the complexity of a typical university building in the VR simulator and generate a set of verbal descriptions for the map.

  2. Divide subjects into 6 groups: users who see the VR simulator, users who only hear the choices they have to make without any descriptions, users who hear the basic automatically generated descriptions, users who hear the basic automatically generated descriptions along with color information, users who hear the tagged descriptions, and users who hear the tagged descriptions along with information about where each of the gateways leads.

  3. For all 6 groups, have them navigate through the map in its entirety until they feel oriented -- the time and number of visits to each spot needed before they feel oriented will be recorded. Then assign them tasks of navigating to a particular spot -- the time and number of wrong turns taken will be recorded. All navigation will be done by asking the user for a choice at each decision point. Also, the groups that have audio descriptions will be able to switch freely between the cardinal-direction descriptions and the left-right-forward-back descriptions. How often they use each mode and under what circumstances will be recorded.

Their ability to learn the map initially and to complete tasks successfully will be a measure of how useful verbal descriptions are for Vulcan users. The expectation is that users who hear only the decision points without any descriptions will be completely lost, users who see the VR simulator will be the best oriented, and users who have descriptions will do almost as well as the ones who can see the VR simulator (with the users who hear color information, tagged descriptions, and/or destinations for each of the gateways having a distinct advantage over the users who do not).

Also, their preference for either the cardinal-direction or left-right-forward-back descriptions will be evaluated by how often they use each type. This will help us better understand which type is more useful, whether there is truly any value in having both modes, and whether there are any special situations that would cause a reversal in preference.



BJK