
Mapping from perception to action

There are several approaches to implementing the control mechanisms that perform the given task. A conventional approach is first to reconstruct a geometrical model of the environment (ball, goal, other agents, etc.), then to deliberate a plan, and finally to execute it. However, this sort of approach is not suitable for the dynamically changing game environment because the reconstruction process is too time-consuming.

A look-up table (LUT) representing the mapping from perception to action, however it is constructed, seems suitable for quick action selection. Such an LUT can be hand-coded given precise a priori knowledge of the environment (the ball, the goals, and other agents) and of the agent model (kinematics/dynamics). In a simple task domain a human programmer can do this to some extent, but it seems difficult to cover all possible situations completely. The opposite approach is to learn action selection given almost no a priori knowledge. Between these extremes lie several variations with more or less a priori knowledge. The approaches can be summarized as follows:

(1) complete hand-coding (no learning);
(2) parameter tuning given structural (qualitative) knowledge (self-calibration);
(3) subtasks learned separately and fit together in a layered fashion [Stone and Veloso1997];
(4) typical reinforcement learning, such as Q-learning, with almost no a priori knowledge but with given state and action spaces;
(5) action selection via construction of the state and action spaces [Asada et al. 1996a, Takahashi et al. 1996];
(6) tabula rasa learning.

These approaches should be evaluated from various viewpoints.
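As an illustration of approach (4), the following is a minimal sketch (not taken from any of the cited systems) of a Q-learning agent whose look-up table maps discretized perceptual states directly to actions. The state and action sets, reward signal, and function names here are hypothetical choices made only for the example.

    # Minimal sketch of tabular Q-learning over hand-chosen state and action spaces.
    # STATES, ACTIONS, and the parameter values are illustrative assumptions,
    # not the spaces used by any particular RoboCup agent.
    import random
    from collections import defaultdict

    # Hypothetical discretized perception: where the ball and goal appear in view.
    STATES = [(ball, goal) for ball in ("left", "center", "right", "lost")
                           for goal in ("left", "center", "right", "lost")]
    ACTIONS = ["turn_left", "turn_right", "forward", "search"]

    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration

    # The look-up table: Q-values indexed by (state, action), initialized to 0.
    Q = defaultdict(float)

    def select_action(state):
        """Epsilon-greedy action selection from the look-up table."""
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state):
        """One Q-learning backup toward the observed reward and next state."""
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

Once trained, the table itself is the perception-to-action mapping: at run time the agent only discretizes its percept and performs the table lookup in select_action, which is what makes this family of approaches fast enough for the game environment.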


