next up previous
Next: Predictive Memory Up: The CMUnited-97 Simulator Team Previous: Introduction

Team Member Architecture


Our new teamwork structure is situated within a team member architecture suitable for domains in which individual agents can capture locker-room agreements and respond to the environment, while acting autonomously. Based on a standard agent architecture, our team member architecture allows agents to sense the environment, to reason about and select their actions, and to act in the real world. At team synchronization opportunities, the team also makes a locker-room agreement for use by all agents during periods of low communication. Figure 1 shows the functional input/output model of the architecture.

Figure 1: The team member architecture for periodic team synchronization (PTS) domains.

The agent keeps track of three different types of state: the world state, the locker-room agreement, and the internal state. The agent also has two different types of behaviors: internal behaviors and external behaviors.

The World State
reflects the agent's conception of the real world, both via its sensors and via the predicted effects of its actions. It is updated as a result of processed sensory information. It may also be updated according to the predicted effects of the external behavior module's chosen actions. The world state is directly accessible to both internal and external behaviors.

The Locker-Room Agreement
is set by the team when it is able to privately synchronize. It defines the flexible teamwork structure as presented below as well as inter-agent communication protocols. The locker-room agreement may change periodically when the team is able to re-synchronize; however, it generally remains unchanged. The locker-room agreement is accessible only to internal behaviors.

The Internal State
stores the agent's internal variables. It may reflect previous and current world states, possibly as specified by the locker-room agreement. For example, the agent's role within a team behavior could be stored as part of the internal state, as could a distribution of past world states. The agent updates its internal state via its internal behaviors.

The Internal Behaviors
update the agent's internal state based on its current internal state, the world state, and the team's locker-room agreement.

The External Behaviors
reference the world and internal states, and send commands to the actuators. These actions affect the real world, thus altering the agent's future percepts. Unlike internal behaviors, external behaviors have no direct access to the locker-room agreement.
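The access restrictions among these components can be summarized in code. The following Python fragment is purely illustrative — the class names, fields, and role logic are hypothetical assumptions, not taken from the CMUnited-97 implementation; it shows only that internal behaviors may read all three state types while external behaviors see just the world and internal states.

```python
from dataclasses import dataclass

@dataclass
class WorldState:
    # The agent's conception of the real world (illustrative field).
    ball_seen: bool = False

@dataclass
class LockerRoomAgreement:
    # Set at team synchronization; readable only by internal behaviors.
    formation: str = "default"

@dataclass
class InternalState:
    # Agent-local variables, e.g. the agent's role within a team behavior.
    role: str = "undefined"

def internal_behavior(internal, world, agreement):
    # Internal behaviors update internal state from all three inputs,
    # including the locker-room agreement.
    if world.ball_seen:
        internal.role = agreement.formation + "-attacker"
    return internal

def external_behavior(world, internal):
    # External behaviors see only world and internal state; the
    # locker-room agreement is not among their inputs.
    return "kick" if internal.role.endswith("attacker") else "move"
```

The information-hiding is enforced simply by the function signatures: the agreement is never passed to `external_behavior`, so team-level knowledge influences action only indirectly, through the internal state.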

Internal and external behaviors are similar in structure: both are sets of condition/action pairs, where conditions are logical expressions over the behavior's inputs and actions are themselves behaviors, as illustrated in Figure 2. In both cases, a behavior is a directed acyclic graph (DAG) of arbitrary depth. The leaves of the DAG are the behavior type's respective outputs: internal state changes for internal behaviors and action primitives for external behaviors.
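Such a condition/action DAG can be sketched minimally as follows; the conditions and primitives here are hypothetical examples, not the actual CMUnited-97 behaviors. Evaluation walks from the top-level behavior down to a leaf output.

```python
class Behavior:
    """A behavior node: an ordered list of condition/action pairs.

    Conditions are predicates over the behavior's inputs; actions are
    either nested Behavior nodes (interior DAG nodes) or leaves,
    which stand for the behavior type's outputs (internal-state
    changes or action primitives).
    """
    def __init__(self, pairs):
        self.pairs = pairs  # list of (condition, action) tuples

    def evaluate(self, state):
        for condition, action in self.pairs:
            if condition(state):
                if isinstance(action, Behavior):
                    return action.evaluate(state)  # descend the DAG
                return action  # leaf: a primitive output
        return None  # no condition matched

# Hypothetical example: selecting among lower-level behaviors is
# itself a higher-level behavior.
attack = Behavior([
    (lambda s: s["near_goal"], "kick"),
    (lambda s: True,           "dribble"),
])
top = Behavior([
    (lambda s: s["has_ball"], attack),          # nested behavior
    (lambda s: True,          "move_to_ball"),  # leaf primitive
])
```

For instance, `top.evaluate({"has_ball": True, "near_goal": False})` descends into `attack` and returns the leaf `"dribble"`.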

Figure 2: Internal and external behaviors are organized in a directed acyclic graph.

Our notion of behavior is consistent with that laid out in [4]. In particular, behaviors can be nested at different levels: selection among lower-level behaviors can be considered a higher-level behavior, with the overall agent behavior considered a single ``do-the-task'' behavior. There is one such top-level internal behavior and one top-level external behavior; they are called when it is time to update the internal state or act in the world, respectively. The team structure presented in Section 5 relies upon and builds on this team member architecture.
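The control cycle implied by the two top-level behaviors — sense, call the top-level internal behavior to update internal state, then call the top-level external behavior to act — can be sketched as follows. All names and the simple role logic are illustrative assumptions, not the CMUnited-97 code.

```python
def top_internal_behavior(internal, world, agreement):
    # Top-level internal behavior: may consult the locker-room agreement.
    internal["role"] = "attacker" if world.get("ball_near") else "defender"

def top_external_behavior(world, internal):
    # Top-level external behavior: no access to the agreement.
    return "kick" if internal["role"] == "attacker" else "hold"

def agent_loop(sense, act, cycles=2):
    world, internal = {}, {}
    agreement = {"formation": "433"}  # unchanged between synchronizations
    for _ in range(cycles):
        world.update(sense())                               # sense
        top_internal_behavior(internal, world, agreement)   # update state
        act(top_external_behavior(world, internal))         # act
```

Each cycle thus mirrors the functional input/output model of Figure 1: percepts flow into the world state, internal behaviors mediate between team-level knowledge and local state, and only external behaviors emit actuator commands.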


Peter Stone
Sun Dec 7 06:54:15 EST 1997