This article describes our initial steps towards creating complete Soccer Server clients using ML techniques. Starting with the ability to intercept a moving ball, we used an NN to teach the client this low-level skill, which is a prerequisite for executing more complex behaviors. This individual skill is an example of the most basic form of Multiagent Learning. Although the action is executed by a single agent, it only makes sense in an environment in which other agents exist: without other agents to kick the ball, the client would never have to intercept the ball head-on.
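The flavor of this supervised approach can be sketched as training a small feed-forward network to map perceptual inputs to an interception command. The sketch below is illustrative only: the features (ball distance and relative angle), the synthetic target function, and the network size are assumptions for the example, not the inputs or architecture used in our experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: (normalized ball distance, relative ball angle)
# paired with a turn command that steers roughly toward the ball, damped by
# distance. A real client would collect such pairs from simulator episodes.
X = rng.uniform([-1.0, -np.pi], [1.0, np.pi], size=(200, 2))
y = (X[:, 1] / (1.0 + np.abs(X[:, 0]))).reshape(-1, 1)

# One hidden layer of 8 tanh units, linear output (the turn angle).
W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)   # hidden activations
    return h, h @ W2 + b2      # predicted turn command

_, pred = forward(X)
loss_before = float(np.mean((pred - y) ** 2))

lr = 0.05
for _ in range(500):
    h, pred = forward(X)
    err = (pred - y) / len(X)            # gradient of MSE w.r.t. output
    dh = (err @ W2.T) * (1.0 - h ** 2)   # backprop through tanh
    W2 -= lr * (h.T @ err); b2 -= lr * err.sum(0)
    W1 -= lr * (X.T @ dh);  b1 -= lr * dh.sum(0)

_, pred = forward(X)
loss_after = float(np.mean((pred - y) ** 2))
```

After training, the mean-squared error on the synthetic set drops well below its initial value, mirroring how the learned interception skill improves with supervised examples.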
Building upon this individual behavior, we then used a DT to teach the client to evaluate the likelihood that a pass to a particular teammate would succeed. This evaluation represents a more conventional form of Multiagent Learning, since several agents (both teammates and opponents) directly affect the passing action.
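A minimal sketch of such a pass evaluator, using scikit-learn's `DecisionTreeClassifier` in place of the actual learner, is shown below. The two features (distance to the intended receiver and the angular gap to the nearest opponent along the passing lane), the labeling rule, and all thresholds are illustrative assumptions, not the feature set from our experiments.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# Hypothetical features per pass attempt: distance to the teammate (meters)
# and angular clearance to the nearest opponent in the lane (degrees).
n = 500
dist = rng.uniform(5.0, 40.0, n)
opp_angle = rng.uniform(0.0, 45.0, n)
X = np.column_stack([dist, opp_angle])

# Synthetic labels: short passes with a clear lane tend to succeed.
success = ((dist < 25.0) & (opp_angle > 10.0)).astype(int)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, success)

# Estimated success likelihood for a short, open pass vs. a long,
# contested one (column 1 of predict_proba is the "success" class).
open_pass = clf.predict_proba([[10.0, 30.0]])[0, 1]
contested = clf.predict_proba([[35.0, 3.0]])[0, 1]
```

The tree's predicted class probabilities give exactly the kind of graded likelihood estimate a client can compare across candidate receivers before committing to a pass.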
Finally, we implemented a set play involving several players and several uses of the learned behaviors. This set play verified that the learned behaviors are useful in a game-like situation.
We chose both the low-level skill and the higher-level decision because of their usefulness in building up higher levels of behavior. Just as the ball-interception skill was used by many of the participants in the passing behavior, knowing whether a teammate is likely to receive a pass will be useful to clients deciding whether to pass, dribble, or shoot. As we continue to layer learned behaviors on top of each other, it will be interesting to study how the different learning methods interact. Keeping MAS, and particularly Multiagent Learning, as our research focus, we will continue moving up to higher-level behaviors until we have created a complete team of Soccer Server clients with learned behaviors. At the same time, we will continue to apply methods that succeed in simulation to our physical robotic system.