Active from 2003–2007
A major current challenge in reinforcement learning research is to extend methods that work well on discrete, short-range, low-dimensional problems to continuous, high-diameter, high-dimensional problems, such as robot navigation using high-resolution sensors. Using SODA, a robot in a continuous world can, with little prior knowledge of its sensorimotor system, environment, and task, improve task learning by first using a self-organizing feature map to develop a set of higher-level perceptual features while exploring with primitive, local actions. Using those features, the agent then builds a set of high-level actions that carry it between perceptually distinctive states in the environment. SODA combines a perceptual abstraction of the agent's sensory input into useful perceptual features with a temporal abstraction of the agent's motor output into extended, high-level actions, thus reducing both the dimensionality and the diameter of the task.
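The perceptual-abstraction step above can be illustrated with a minimal Kohonen self-organizing map: each map unit's weight vector becomes one learned perceptual feature, and an input's best-matching unit gives its discrete feature label. This is only a hedged sketch under assumed parameters (grid size, learning-rate and neighborhood schedules are illustrative); SODA's actual feature learning operates on high-dimensional robot sensor images, and the function names here (`train_som`, `winner`) are hypothetical, not from the papers.

```python
import numpy as np

def train_som(data, grid_h=5, grid_w=5, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small Kohonen SOM on sensor vectors (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    # One weight vector per grid unit; each unit acts as a perceptual feature.
    weights = rng.random((grid_h, grid_w, dim))
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    for epoch in range(epochs):
        # Decay learning rate and neighborhood radius over training.
        lr = lr0 * (1.0 - epoch / epochs)
        sigma = sigma0 * (1.0 - epoch / epochs) + 0.5
        for x in data:
            # Best-matching unit: grid cell whose weights are closest to the input.
            d = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood pulls nearby units toward the input as well.
            h = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2.0 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
    return weights

def winner(weights, x):
    """Map an input to its discrete perceptual feature: the best-matching unit."""
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)
```

In SODA the `winner` index plays the role of the perceptually distinctive state label: high-level actions (options) then carry the agent from one such label to another.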
Jefferson Provost, Ph.D. Alumnus, jefferson provost [at] gmail com
Self-Organizing Distinctive State Abstraction Using Options 2007
Jefferson Provost, Benjamin J. Kuipers, and Risto Miikkulainen, In Proceedings of the 7th International Conference on Epigenetic Robotics, 2007.
Developing navigation behavior through self-organizing distinctive state abstraction 2006
Jefferson Provost, Benjamin J. Kuipers, and Risto Miikkulainen, Connection Science, Vol. 18 (2006), pp. 159-172.
Self-Organizing Perceptual and Temporal Abstraction for Robot Reinforcement Learning 2004
Jefferson Provost, Benjamin J. Kuipers, and Risto Miikkulainen, In AAAI-04 Workshop on Learning and Planning in Markov Processes, 2004.
Toward Learning the Causal Layer of the Spatial Semantic Hierarchy using SOMs 2001
Jefferson Provost, Patrick Beeson, and Benjamin J. Kuipers, In AAAI Spring Symposium Series, Learning Grounded Representations, 2001.