
Color Learning on a Mobile Robot: Towards Full Autonomy under Changing Illumination

Color Learning on a Mobile Robot: Towards Full Autonomy under Changing Illumination.
Mohan Sridharan and Peter Stone.
In The 20th International Joint Conference on Artificial Intelligence, pp. 2212–2217, January 2007.
IJCAI-07

Download

[PDF] 129.5 kB  [PostScript] 184.9 kB

Abstract

A central goal of robotics and AI is to be able to deploy an agent to act autonomously in the real world over an extended period of time. It is commonly asserted that in order to do so, the agent must be able to learn to deal with unexpected environmental conditions. However, an ability to learn is not sufficient. For true extended autonomy, an agent must also be able to recognize when to abandon its current model in favor of learning a new one; and how to learn in its current situation. This paper presents a fully implemented example of such extended autonomy in the context of color map learning on a vision-based mobile robot for the purpose of image segmentation. Past research established the ability of a robot to learn a color map in a single fixed lighting condition when manually given a "curriculum", an action sequence designed to facilitate learning. This paper introduces algorithms that enable a robot to i) devise its own curriculum; and ii) recognize for itself when lighting conditions have changed sufficiently to warrant learning a new color map.
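To make the setting concrete, the sketch below is a minimal, hypothetical illustration of the two pieces the abstract refers to: segmenting an image with a learned color map (a lookup table from quantized pixel values to symbolic color labels) and flagging an illumination change by comparing color distributions. The function names, the YCbCr quantization, the KL-divergence test, and the threshold are assumptions made for illustration only; they are not the algorithms introduced in the paper.

# Illustrative sketch only: color-map segmentation plus a simple
# distribution-distance test for illumination change. All names,
# constants, and the KL-divergence statistic are assumptions, not
# the paper's method.

import numpy as np

NUM_LABELS = 9          # number of symbolic colors (assumed)
BINS_PER_CHANNEL = 64   # quantization of each color channel (assumed)

def segment(image, color_map):
    """Label every pixel by looking up its quantized color in the color map.

    image:     H x W x 3 uint8 array (assumed YCbCr)
    color_map: array with BINS_PER_CHANNEL**3 entries, one label per cell
    """
    q = (image.astype(np.int32) * BINS_PER_CHANNEL) // 256   # quantize each channel to [0, 63]
    idx = (q[..., 0] * BINS_PER_CHANNEL + q[..., 1]) * BINS_PER_CHANNEL + q[..., 2]
    return color_map.flat[idx]                                # H x W label image

def color_histogram(image):
    """Normalized histogram over quantized colors, used as an illumination signature."""
    q = (image.astype(np.int32) * BINS_PER_CHANNEL) // 256
    idx = (q[..., 0] * BINS_PER_CHANNEL + q[..., 1]) * BINS_PER_CHANNEL + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=BINS_PER_CHANNEL ** 3).astype(np.float64)
    return hist / hist.sum()

def illumination_changed(reference_hist, current_hist, threshold=0.5):
    """Flag a change when the KL divergence between color distributions is large.

    The statistic and threshold are placeholders for whatever change-detection
    test the robot actually uses.
    """
    eps = 1e-9
    kl = np.sum(current_hist * np.log((current_hist + eps) / (reference_hist + eps)))
    return kl > threshold

In this sketch, a robot would keep a reference histogram for the lighting condition under which its current color map was trained, and a trigger like illumination_changed would signal that the map should be abandoned and a new one learned.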

BibTeX Entry

@InProceedings{IJCAI07-mohan,
  author    = "Mohan Sridharan and Peter Stone",
  title     = "Color Learning on a Mobile Robot: Towards Full Autonomy under Changing Illumination",
  booktitle = "The 20th International Joint Conference on Artificial Intelligence",
  month     = "January",
  year      = "2007",
  pages     = "2212--2217",
  abstract  = {
    A central goal of robotics and AI is to be able to deploy an agent to
    act autonomously in the real world over an extended period of time. It
    is commonly asserted that in order to do so, the agent must be able to
    \emph{learn} to deal with unexpected environmental conditions. However,
    an \emph{ability} to learn is not sufficient. For true extended
    autonomy, an agent must also be able to recognize \emph{when} to
    abandon its current model in favor of learning a new one; and
    \emph{how} to learn in its current situation. This paper presents a
    fully implemented example of such extended autonomy in the context of
    color map learning on a vision-based mobile robot for the purpose of
    image segmentation. Past research established the ability of a robot
    to learn a color map in a single fixed lighting condition when
    manually given a ``curriculum'', an action sequence designed to
    facilitate learning. This paper introduces algorithms that enable a
    robot to i) devise its own curriculum; and ii) recognize for itself
    when lighting conditions have changed sufficiently to warrant learning
    a new color map.
  },
}
