A modular reinforcement learning model for human visuomotor behavior in a driving task (2011)
We present a task scheduling framework for studying human eye movements in a realistic 3D driving simulation. Human drivers are modeled using a reinforcement learning algorithm with "task modules" that make learning tractable and provide a cost metric for behaviors. Eye movement scheduling is simulated with a loss minimization strategy that incorporates expected reward estimates given uncertainty about the state of the environment. This work extends a previous model applied to a simulated walking task, using a more dynamic state space and additional task modules that reflect the greater complexity of driving. We also discuss future work applying this model to navigation and fixation data from human drivers.
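The gaze-scheduling idea in the abstract can be illustrated with a minimal sketch. In this toy version, each task module (the module names, reward rates, and uncertainty-growth parameters below are hypothetical, not from the paper) accumulates state uncertainty while unfixated, and gaze is greedily allocated to the module whose stale state estimate would cost the most expected reward:

```python
class TaskModule:
    """One behavior (e.g. lane following) with its own reward estimate.

    State uncertainty grows while the module is not fixated and is
    reset to zero by a fixation. Parameters are illustrative only.
    """
    def __init__(self, name, reward_rate, noise_growth):
        self.name = name
        self.reward_rate = reward_rate    # expected reward when state is known
        self.noise_growth = noise_growth  # uncertainty added per unfixated step
        self.variance = 0.0

    def expected_loss(self):
        # Expected cost of acting on a stale estimate: here, simply the
        # reward at stake scaled by the current state uncertainty.
        return self.reward_rate * self.variance


def schedule_gaze(modules, steps):
    """Greedy loss-minimizing scheduler: at each step, fixate the module
    whose uncertainty would forfeit the most expected reward."""
    fixations = []
    for _ in range(steps):
        target = max(modules, key=lambda m: m.expected_loss())
        fixations.append(target.name)
        for m in modules:
            if m is target:
                m.variance = 0.0              # fixation resolves uncertainty
            else:
                m.variance += m.noise_growth  # uncertainty drifts upward
    return fixations


modules = [
    TaskModule("lane_following", reward_rate=1.0, noise_growth=0.3),
    TaskModule("car_following", reward_rate=0.5, noise_growth=0.2),
    TaskModule("speed_control", reward_rate=0.2, noise_growth=0.1),
]
seq = schedule_gaze(modules, 10)
print(seq)
```

Under these made-up parameters the high-reward lane-following module is fixated most often, with occasional checks of the lower-priority modules, qualitatively matching the intermittent gaze sampling the model is designed to capture.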
Citation:
Proceedings of the AISB 2011 Symposium on Architectures for Active Vision. (2011), pp. 33-40.
Dana Ballard Faculty dana [at] cs utexas edu
Leif Johnson Ph.D. Student leif [at] cs utexas edu
Brian T. Sullivan Ph.D. Alumni brians [at] mail utexas edu