Project Topics and Readings
- CS 395T: Intelligent Robotics (Spring 2007)
This is an addendum to the course syllabus.
The Intelligent Wheelchair is an intelligent robot, sensing and
learning about the environment it travels in. This role leads to one
set of problems, related to how knowledge of the environment is
represented, learned, stored, retrieved, and used.
The Intelligent Wheelchair is also a mobility aid for a human driver,
able to act autonomously, but always subordinated to the human driver.
This leads to a second set of problems, related to how the human-robot
interface can increase the human driver's autonomy while performing a
task.
Each student will pick a project topic from one of these two sets.
I would like the class to divide approximately equally between the
two sets of topics.
Spatial knowledge representation, exploration, and mapping
Our Hybrid Spatial Semantic Hierarchy (HSSH) approach describes the
world in terms of several distinct representations for spatial
knowledge:
- Local metrical maps
- Local topology
- We have methods for identifying gateways in the local
environment map, and using them to describe the local
topology of the place neighborhood.
[Beeson et al., ICRA 2005]
- Global topological maps
- Global metrical maps
If you pick a topic in this area, your term project will be to extend
the robotic capabilities of the Intelligent Wheelchair in this area.
You will search and review the relevant literature, looking for useful
methods, and you will give a presentation on the background knowledge
that you will draw upon to solve your problem.
- Robot body sense: self-calibration of robot geometry,
motion parameters, laser and camera parameters, all against
each other. Make a clear and minimal connection between
internal sensory parameters and external cultural ones.
- find papers on self-calibration
- Local motion planning: investigate modern planning methods
such as RRTs, PRMs, D* (fast replanning), SVM planning,
RL and policy learning, etc.
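As a concrete reference point for this topic, here is a minimal 2-D RRT sketch. The obstacle map, step size, and goal bias below are illustrative assumptions for the example, not part of any existing wheelchair code:

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, goal_tol=0.5, max_iters=20000, seed=0):
    """Minimal 2-D RRT: grow a tree from start toward random samples
    until some node lands within goal_tol of the goal."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        # Occasionally bias the sample toward the goal.
        sample = goal if rng.random() < 0.1 else (rng.uniform(0, 10), rng.uniform(0, 10))
        # Extend from the nearest existing node by one step.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d == 0:
            continue
        new = (nx + step * (sample[0] - nx) / d, ny + step * (sample[1] - ny) / d)
        if not is_free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:
            # Walk the parent pointers back to the root.
            path, j = [], len(nodes) - 1
            while j is not None:
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None

# Free space: a 10x10 box with one circular obstacle at (5, 5).
free = lambda p: 0 <= p[0] <= 10 and 0 <= p[1] <= 10 and math.dist(p, (5, 5)) > 1.5
path = rrt((1, 1), (9, 9), free)
```

A real planner would also check the edge between a node and its extension for collisions, not just the endpoint; the sketch omits that for brevity.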
- High-performance local control for specialized
environments like doors, ramps, desks, elevators, toilet
stalls, etc. Effects of latencies on control.
- Vision-based localization and control: vision-based
reactive control; vision-based 6 dof localization using
tracking of known point or line landmarks.
- Dynamic obstacles: recognizing them, eliminating them
for purposes of localization, modeling them, etc. Safe control
in the presence of dynamic obstacles, such as moving through
crowds during class-change time.
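One crude baseline for recognizing dynamic obstacles is scan differencing. This sketch assumes two laser scans taken from the same pose with the same angular alignment (both simplifying assumptions; in practice the scans must first be registered):

```python
def split_static_dynamic(scan_a, scan_b, tol=0.2):
    """Label each beam of scan_b as static or dynamic by comparing it
    against the aligned earlier scan_a.  Both are lists of range
    readings in the same angular order; beams whose range changed by
    more than tol are attributed to a moving object."""
    static, dynamic = [], []
    for i, (ra, rb) in enumerate(zip(scan_a, scan_b)):
        (static if abs(ra - rb) <= tol else dynamic).append(i)
    return static, dynamic

# Beam 2 shortened between scans: something moved into the beam.
static, dynamic = split_static_dynamic([5.0] * 5, [5.0, 5.0, 2.1, 5.0, 5.0])
```

Beams flagged dynamic could then be dropped before localization, or fed to a tracker that models the moving object.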
- Mapping with common architectural features such as
straight walls, right angles and rectangular rooms. This
applies the methods of FastSLAM, but to fixed landmarks that are
more structured than points. This should allow higher-level
constraints among these structured objects.
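A standard building block for wall landmarks is a total-least-squares line fit in normal form, which is common in the laser-mapping literature. The function below is an illustrative sketch of that fit, not project code:

```python
import math

def fit_line(points):
    """Total-least-squares line fit over 2-D points: returns (angle, d)
    for the line x*cos(angle) + y*sin(angle) = d, minimizing
    perpendicular error -- a convenient parameterization for walls
    extracted from a laser scan."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points)
    syy = sum((p[1] - my) ** 2 for p in points)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points)
    # Closed-form normal direction: eigenvector of the scatter matrix
    # for its smaller eigenvalue.
    angle = 0.5 * math.atan2(-2 * sxy, syy - sxx)
    d = mx * math.cos(angle) + my * math.sin(angle)
    return angle, d

# Fit a horizontal wall at y = 2.
wall = [(0.0, 2.0), (1.0, 2.0), (2.0, 2.0), (3.0, 2.0)]
a, d = fit_line(wall)
```

Higher-level constraints (right angles, parallel walls) then become simple relations between the fitted (angle, d) parameters of pairs of lines.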
- Sensor fusion with vision and laser rangefinders. Different
types of information from vision (SIFT features, dense stereo, etc).
- Hazard detection, especially for overhangs and drop-offs.
- Optimal layout of places in a global frame of reference
using estimated metrical displacements from exploration.
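The layout problem can be sketched in one dimension: given noisy relative displacements between places, anchor one place and minimize the squared residuals. Gradient descent is used here purely for illustration (a real implementation would solve the sparse linear system directly):

```python
def layout_1d(n_places, measurements, iters=2000, lr=0.1):
    """Least-squares layout of places on a line.  measurements is a
    list of (i, j, d) triples meaning "place j was observed d ahead of
    place i".  Place 0 is anchored at x = 0; the rest minimize
    sum of (x_j - x_i - d)^2 by gradient descent."""
    x = [0.0] * n_places
    for _ in range(iters):
        grad = [0.0] * n_places
        for i, j, d in measurements:
            e = x[j] - x[i] - d
            grad[j] += 2 * e
            grad[i] -= 2 * e
        for k in range(1, n_places):  # place 0 stays anchored
            x[k] -= lr * grad[k]
    return x

# A loop of four places with slightly inconsistent odometry: the
# 0.2 m loop-closure error gets distributed evenly over the four edges.
meas = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 3.2)]
x = layout_1d(4, meas)
```

The same formulation extends to 2-D displacements with heading, at the cost of the problem becoming nonlinear in the orientation variables.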
- Probability distribution over the tree of possible maps by
comparing observed displacements with the optimal layout of a
given topological map. Demonstrate (with the 90,000-place GIS
map) that large-scale topological mapping is tractable.
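Given each candidate map's layout residuals, the posterior over maps can be sketched as a Gaussian likelihood times a prior, normalized over the candidates. The sigma value and residuals below are illustrative assumptions:

```python
import math

def map_posterior(priors, residuals, sigma=0.5):
    """Posterior over candidate topological maps.  For each map, the
    optimal layout leaves residuals between observed and predicted
    displacements; score each map with an independent zero-mean
    Gaussian likelihood on those residuals, weight by its prior,
    and normalize."""
    scores = []
    for prior, res in zip(priors, residuals):
        loglik = sum(-0.5 * (r / sigma) ** 2 for r in res)
        scores.append(prior * math.exp(loglik))
    z = sum(scores)
    return [s / z for s in scores]

# Map 0 explains the displacements well; map 1 leaves large residuals.
post = map_posterior([0.5, 0.5], [[0.0, 0.1], [1.5, 2.0]])
```

With many candidate maps, the log-likelihoods should be combined with a log-sum-exp normalization to avoid underflow; the direct exponentiation here is only safe for a toy example.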
- Using existing graphical maps as planning resources.
- (research on understanding maps and diagrams)
- The 90,000-place GIS map of Austin, Texas
Human-robot interaction (HRI)
The human driver (H) and the Intelligent Wheelchair (W) form a
partnership between two cooperating agents. Each has its own
capabilities, perceptions, knowledge of the world, and degree of
autonomy, but H is the dominant member of the partnership, and W
is subordinate. How can they best work together?
Such a relationship relies on trust, and trust is built and maintained
through communication, as well as by observing success and failure.
How do we think about these issues when one member of the relationship
is an intelligent robot?
- What kinds of trust does H need to have in W?
- How does W earn H's trust?
- W obeys orders from H. Does W need to build a model of H?
- What kinds of communication are necessary from H to W?
- What kinds of communication are necessary from W to H?
The Intelligent Wheelchair (W) takes actions, but only subordinate to
the autonomy of the human driver (H). W can observe its environment,
and can learn an increasingly accurate map while it travels, even
though it travels only when and where H instructs. W can make plans
and use its own autonomy to carry them out, but only subordinate to
the autonomy of H.
Our work is based on the hypothesis that humans represent spatial
knowledge at several distinct ontological levels. These levels of
knowledge representation correspond to three distinct levels of
human-robot communication: Control, Command, and Goal.
- Control level interface
- We currently use the joystick to communicate the intended
destination of travel. The robot plans a trajectory to
avoid hazards on the way.
- Command level interface
- We have "turn" commands to select a path in the local topology,
and "travel" commands to move along a path-segment to the next
decision point.
- Goal level interface
- Our plan is for the driver to specify a known place in the
environment. The Intelligent Wheelchair plans a route
to that place and carries out the plan.
- Natural language interaction
- The Intelligent Wheelchair must have considerable autonomy,
but always under the "executive control" of the human driver.
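The Goal level above amounts to route search over the global topological map. A minimal Dijkstra sketch, with hypothetical place names and path-segment lengths chosen for illustration:

```python
import heapq

def plan_route(edges, start, goal):
    """Goal-level planning sketch: Dijkstra over a topological map.
    edges maps each place to a list of (neighbor, segment_length)
    pairs; returns the cheapest sequence of places from start to
    goal, or None if the goal is unreachable."""
    frontier = [(0.0, start, [start])]
    done = set()
    while frontier:
        cost, place, route = heapq.heappop(frontier)
        if place == goal:
            return route
        if place in done:
            continue
        done.add(place)
        for nbr, d in edges.get(place, []):
            if nbr not in done:
                heapq.heappush(frontier, (cost + d, nbr, route + [nbr]))
    return None

# Hypothetical corridor network: the detour through the lab is
# shorter than the direct hall-to-lobby segment.
topo = {"office": [("hall", 3.0)],
        "hall": [("office", 3.0), ("lobby", 10.0), ("lab", 4.0)],
        "lab": [("hall", 4.0), ("lobby", 5.0)],
        "lobby": []}
route = plan_route(topo, "office", "lobby")
```

Each place-to-place step of the resulting route would then be handed down to the Command level as "turn" and "travel" actions.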
This leads to several more concrete questions about the human-robot
interaction and the interfaces that support it.
- How should W interpret Control level instructions from H?
- What is the syntax and semantics of the "joystick language"
that H uses to express local control goals to W?
- How should W interpret Command level instructions from H?
- How well can W identify the qualitative decision structure
of local place neighborhoods?
- How should W interpret Goal level instructions from H?
- How can H's intentions be expressed through different
physical interface devices?
- How can we provide safety/comfort guarantees for W's behavior?
- How can W accept more natural, extended, human-like directions?
- How should the relationship between H and W develop?
- How is trust earned, maintained, lost, and recovered?
If you pick a topic in this area, you are selecting a major project or
research group in the Human-Robot Interaction area. You are
responsible for reading essentially everything that your group has
done, and giving a presentation to the class evaluating and
summarizing their contributions to HRI, and especially for clarifying
what that group's work has contributed to the problems we need to
solve. Your term project will be to apply what you have learned from
your project or group to the Intelligent Wheelchair. You can think of
your task as answering the questions: What can we learn from this
research? How does it help us achieve our goals? Where's the gold?
- ROLLAND, U. Bremen, Germany
- NurseBot project, CMU/Pitt/UMich
- Richard Simpson, AT Sciences and U. Pittsburgh
- Maja Mataric, USC Interaction Lab
- KTH, Sweden, Human interaction with mobile service robots
- Marjorie Skubic, U. Missouri-Columbia
- Greg Trafton, NRL. (For citations of his papers, look at
Nick Cassimatis' publications, under "Robotics".
Then get the papers from the UT Library.)
- Robonaut, NASA JSC
- Mars Rover, human-robot interaction
- Cynthia Breazeal, MIT Media Lab
- Holly Yanco, U. Mass. Lowell
- Center for Robot Assisted Search & Rescue, U. South Florida
- Japanese and Korean work on human-robot interaction
- Vulcan: our physical robotic wheelchair, with
laser rangefinder and stereo cameras.
- Vizard: virtual reality environment for testing
the Intelligent Wheelchair software.
- City of Austin GIS database: for large-scale testing
of topological mapping methods.
This page provides starting points for you to find literature
relevant to your topic. You should expect to search for relevant
material. Some researchers, particularly in academia, are very good
about making their publications easy to find and available online.
Others made an effort at one point, but their publication lists are
now significantly out of date. Some places have web pages that are essentially PR
brochures, without useful publications. And some researchers don't
even try. Your task is to find the gold, or establish it's not there.
Follow references, search with Google and Google Scholar, look through
tables of contents of relevant journals, and so on. (The UT Library
has an excellent collection of online journals.) Find
relevant sources that tell you about the overall structure of the
field. Special issues of journals are often quite helpful. One on
Human-Robot Interaction (HRI) is IEEE SMC-C 34(2), May 2004.