Online Model Learning in Adversarial Markov Decision Processes (Extended Abstract) (2010)
Consider, for example, the well-known game of Roshambo, or rock-paper-scissors, in which two players select one of three actions simultaneously. A player may know that the adversary bases its next action on some bounded sequence of the past joint actions, but be unaware of its exact strategy. For example, the player may notice that every time it selects P, the adversary selects S in the next step; or perhaps whenever it selects R in three of the last four steps, the adversary selects P 90 percent of the time in the next step. The challenge is that, to begin with, neither the adversary's (possibly stochastic) function that maps action histories to future actions, nor even how far back the adversary looks in the action history (beyond an upper bound), may be known. At a high level, this paper is concerned with automatically building such predictive models of an adversary's future actions as a function of past interactions.
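The bounded-memory adversary described above can be illustrated with a small sketch. The code below is an illustrative assumption, not the paper's algorithm: it maintains one empirical frequency model per candidate memory length k up to a known upper bound K, and scores each model by how often it predicts the adversary's next action. Against a hypothetical memory-1 adversary (one that always plays the action beating the player's previous action), the k=1 model quickly dominates. All names (`HistoryModel`, `simulate`) are invented for this example.

```python
import random
from collections import defaultdict, Counter

ACTIONS = ["R", "P", "S"]


class HistoryModel:
    """Predicts the adversary's next action from the last k joint actions
    by counting empirical frequencies (illustrative sketch only)."""

    def __init__(self, k):
        self.k = k
        self.counts = defaultdict(Counter)

    def _key(self, history):
        # Condition on the last k joint actions (shorter early on).
        return tuple(history[-self.k:]) if self.k > 0 else ()

    def update(self, history, adversary_action):
        self.counts[self._key(history)][adversary_action] += 1

    def predict(self, history):
        c = self.counts[self._key(history)]
        # Fall back to a random guess for unseen contexts.
        return c.most_common(1)[0][0] if c else random.choice(ACTIONS)


def simulate(rounds=300, K=3, seed=0):
    """Play against a hypothetical memory-1 adversary and return, for each
    candidate memory length k in 0..K, how often that model's prediction
    of the adversary's next action was correct."""
    rng = random.Random(seed)
    beats = {"R": "P", "P": "S", "S": "R"}  # beats[x] is the action that beats x
    models = {k: HistoryModel(k) for k in range(K + 1)}
    correct = Counter()
    history = []  # list of past joint actions (my_action, adv_action)

    for _ in range(rounds):
        my_action = rng.choice(ACTIONS)
        # Memory-1 adversary: respond to the player's previous action.
        adv_action = beats[history[-1][0]] if history else rng.choice(ACTIONS)

        # Each model predicts from past joint actions, before observing adv_action.
        for k, model in models.items():
            if model.predict(history) == adv_action:
                correct[k] += 1
        for model in models.values():
            model.update(history, adv_action)
        history.append((my_action, adv_action))

    return correct
```

Running `simulate()` shows the model with the adversary's true memory length (k=1) accumulating far more correct predictions than models that condition on too much history (their finer contexts are rarely revisited) or too little.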
In Proceedings of the Ninth International Conference on Autonomous Agents and Multiagent Systems (AAMAS), pp. 1583–1584, May 2010.

Doran Chakraborty Ph.D. Alumni chakrado [at] cs utexas edu
Peter Stone Faculty pstone [at] cs utexas edu